Hi Ken, thank you for your reply, and sorry for the somewhat brief explanation; a complete description would be very long, so I will try to make the problem as clear as possible.
I already use a Verilog-AMS module that determines the zero crossings with the cross function, with the resolution set to roughly 100 fs. The problem is that the transient simulation itself seems to be inaccurate on the scale of several hundred fs to ps.
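To illustrate the mechanism I suspect (this is a toy Python model, not my actual testbench): the crossing detector can only interpolate between the time points the simulator actually produced, so coarse time steps distort the extracted crossing time even if the crossing resolution itself is set very fine. Sampling the same 1 GHz sine with 1 ps and 70 ps steps already shifts the interpolated zero crossing by a few hundred fs:

```python
import numpy as np

def crossing_time(t, v, threshold=0.0):
    """Linearly interpolate the first rising crossing of `threshold`.

    t, v: time points and waveform samples. The result is only as good
    as the waveform between the time points the simulator delivered.
    """
    for i in range(len(v) - 1):
        if v[i] < threshold <= v[i + 1]:
            frac = (threshold - v[i]) / (v[i + 1] - v[i])
            return t[i] + frac * (t[i + 1] - t[i])
    return None

f = 1e9  # 1 GHz clock
# Same waveform, sampled with fine (1 ps) and coarse (70 ps) time steps
t_fine = np.arange(0.8e-9, 1.2e-9, 1e-12)
t_coarse = np.arange(0.8e-9, 1.2e-9, 70e-12)
tc_fine = crossing_time(t_fine, np.sin(2 * np.pi * f * t_fine))
tc_coarse = crossing_time(t_coarse, np.sin(2 * np.pi * f * t_coarse))
print(abs(tc_fine - tc_coarse))  # ~2e-13 s: hundreds of fs of crossing error
```

This matches the scale of the error I see, which is why I believe the time-step control, not the crossing resolution, is the limiting factor.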
A clear sign was that, with the default conservative settings, two simulation runs that differ only by an additional, completely independent dummy circuit (an inverter or similar, driven by ideal sources) produce results with a correlation factor of only 0.296, which means they are very nearly uncorrelated. If I set the maximum step size to 100 fs, the DNL and INL errors show the expected behavior: simulations of the same input vectors lead to the same result once the static drift is removed. The drawbacks are that the required simulation time exceeds 120 h and that the output files grow to several hundred GB, which makes it more or less impossible to view the waveforms. Even relaxed simulator settings lead to PSF files of more than 10 GB for only a few signals.
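For clarity, the correlation factor I quote is the ordinary Pearson coefficient between the two runs' per-code error vectors. A minimal sketch with synthetic data (the vector length and noise level here are made up, only for illustration) shows how uncorrelated per-run "simulator noise" drags the coefficient far below 1 even when the underlying circuit behavior is identical:

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical per-code DNL vectors from two runs of an identical circuit:
# a shared true component plus independent per-run simulation error.
true_dnl = rng.normal(size=256)
run_a = true_dnl + 1.5 * rng.normal(size=256)
run_b = true_dnl + 1.5 * rng.normal(size=256)
r = np.corrcoef(run_a, run_b)[0, 1]
print(r)  # far below 1: the runs are dominated by uncorrelated error
```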
I have already read http://www.kenkundert.com/docs/bctm98-MSsim.pdf. My observations lead me to believe that the local truncation error is the source of the simulator inaccuracy. With the default conservative settings, Spectre sometimes uses step sizes of more than 10 ps. That seems to be too much, because the slew rate is kept relatively low on purpose to enable interpolation with only four input phases, so the complete waveform of a period matters. This should also be why even an unlimited crossing resolution would still give inaccurate values for the phase shift/rotation/delay determined between the input and output clocks.
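To make the interpolation argument concrete, here is a toy model (my own simplification, not the actual circuit) of mixing two adjacent clock phases 90 degrees apart with slow, sinusoidal edges. The output crossing time depends on the entire edge shape, so waveform errors in the middle of the edge translate directly into phase error at the output:

```python
import numpy as np

def interp_crossing(w, f=1e9, n=200001):
    """Zero crossing of w*phase0 + (1-w)*phase1, phases 90 deg apart."""
    t = np.linspace(0.0, 1.0 / f, n)
    th = 2 * np.pi * f * t
    v = w * np.sin(th) + (1 - w) * np.sin(th - np.pi / 2)
    # first rising zero crossing, linearly interpolated
    i = np.where((v[:-1] < 0) & (v[1:] >= 0))[0][0]
    frac = -v[i] / (v[i + 1] - v[i])
    return t[i] + frac * (t[i + 1] - t[i])

for w in (0.25, 0.5, 0.75):
    print(w, interp_crossing(w))
```

Note that equal weight steps do not give equal time steps (the crossing follows an arctangent of the weight ratio); resolving exactly this nonlinearity, i.e. the DNL/INL of the interpolator, is what the transient simulation has to get right at the sub-ps level.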
Therefore, I have real problems performing jitter analysis while pure simulator variations are in the range of 0.5-1 LSB. For the DNL and INL errors I know what the results should look like, but for jitter I have no idea. I have feedback loops with -3 dB bandwidths of ~125 MHz (input shaping) and ~1 GHz (actively regulated current mirrors), and I have absolutely no idea how they jointly affect the power-supply-noise-induced jitter.
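For reference, the jitter figure I am after is the usual time-interval error: fit an ideal uniform clock to the measured crossing times and take the RMS of the residuals. A minimal sketch (synthetic crossing times, not simulation output) shows why 0.5-1 LSB of simulator variation is fatal here, since the quantity being extracted is itself only a few hundred fs:

```python
import numpy as np

def tie_rms(crossings):
    """RMS time-interval error of a list of clock crossing times.

    A least-squares fit of t ~ t0 + n*T removes the average frequency
    and static offset; the residuals are the TIE.
    """
    crossings = np.asarray(crossings)
    n = np.arange(len(crossings))
    T, t0 = np.polyfit(n, crossings, 1)
    tie = crossings - (t0 + n * T)
    return np.sqrt(np.mean(tie ** 2))

# Ideal 1 GHz clock edges plus 200 fs RMS Gaussian jitter
rng = np.random.default_rng(1)
edges = np.arange(1000) * 1e-9 + rng.normal(scale=200e-15, size=1000)
print(tie_rms(edges))  # close to the injected 200 fs
```

Any simulator-induced crossing error of comparable size adds in quadrature and can easily dominate a result of this magnitude.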
With PXF I have the problem that the variation of the operating point of the current mirrors is quite large (>0.3 V), and the current through the mixer transistors also varies by a factor of 64 over the input vectors. Therefore, I do not really know whether I can apply PXF as a small-signal analysis at all, because the crossing of the clock signals depends on large-signal properties of the waveform.
I am really perplexed. When I started the design process, I thought that 1.5 ps phase resolution would be challenging, but not really difficult if you know what you are doing. I was somewhat afraid of mismatch/corners and noise, but not of the simulation itself. There are PHYs at 10 Gbit/s and above that use phase interpolation. How do their designers simulate phase interpolators for even higher frequencies with high confidence that the design meets the requirements for very low BERs under all circumstances? I cannot believe that they fabricate test chips to determine the final behavior because the simulations are not accurate enough or too time-consuming... So where is my mistake?