Eng-Tips is the largest engineering community on the Internet


Minimum reliable SPI clock speed, MCP3008 ADC

Mark_B

Mechanical
Jan 21, 2024
I am setting up a circuit to read a 10 kΩ type 3 thermistor. I have an MCP3008 ADC that I will be interfacing with an AVR ATmega644P microcontroller. Why does Microchip recommend higher clock speeds for higher Vdd? I had understood that a higher operating voltage allows a higher ADC clock speed, but I can't see why it would require one. Or is their choice to place Table 6-1 next to the paragraph about minimum clock speeds just confusing me?

How much do I need to worry about these things when reading a thermistor for comfort HVAC applications, where a 0.25 °C misreading is totally acceptable? I'm interested in the relevant math.
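For context, here is the rough error budget I sketched for myself. The beta value, divider resistor, and 5 V reference are my own assumptions for a typical 10 kΩ type 3 NTC, not anything from the datasheet:

```python
import math

# Assumed thermistor parameters (typical for a 10k "type 3" NTC; my guesses)
R0, T0, BETA = 10_000.0, 298.15, 3950.0   # ohms, kelvin, kelvin
R_FIXED = 10_000.0                         # assumed divider resistor, ohms
N_BITS = 10                                # MCP3008 resolution

def r_ntc(t_c):
    """Thermistor resistance at t_c Celsius (simple beta model)."""
    t_k = t_c + 273.15
    return R0 * math.exp(BETA * (1.0 / t_k - 1.0 / T0))

def counts(t_c):
    """Ideal ADC code for a divider: Vout/Vdd = R_fixed / (R_ntc + R_fixed)."""
    ratio = R_FIXED / (r_ntc(t_c) + R_FIXED)
    return ratio * (2 ** N_BITS - 1)

# Sensitivity around 25 C: counts per degree, hence degrees per LSB
dc = counts(25.5) - counts(24.5)
print(f"~{dc:.1f} counts/degC -> {1.0 / dc:.3f} degC per LSB")
```

With these assumptions one LSB is well under a tenth of a degree around 25 °C, so even a few LSBs of conversion error stays inside a 0.25 °C budget.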



The datasheet, regarding SPI clock speed:


> "Figure 4-2: Maximum Clock Frequency vs. Input Resistance (RS) to maintain less than a 0.1 LSB deviation in INL from nominal conditions."

At my input resistance of 10 kΩ and Vdd of 5 V, the figure shows a clock speed of around 0.75 MHz. I take this to mean that if I set the SPI clock to about 0.75 MHz, I will avoid any accuracy issues.


Later in the sheet:

Page 22, "Maintaining Minimum Clock Speed":

> "the time between the end of the sample period and the time that all 10 data bits have been clocked out must not exceed 1.2 ms (effective clock frequency of 10 kHz)"

Again, this seems to suggest my clock speed will be fine.
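As a sanity check on that passage, the arithmetic behind the 1.2 ms and 10 kHz figures is straightforward:

```python
HOLD_TIME = 1.2e-3      # seconds the sample capacitor holds charge (>= +85 C spec)
DATA_BITS = 10          # bits clocked out after the sample period

# Slowest clock that still gets all 10 bits out inside the hold window
f_min_bits = DATA_BITS / HOLD_TIME
print(f"{f_min_bits:.0f} Hz minimum from the 10-bit count")  # ~8.3 kHz

# The datasheet rounds this up to an effective 10 kHz figure; either way,
# a 0.75 MHz SPI clock sits roughly two orders of magnitude above the floor.
margin = 0.75e6 / 10e3
print(f"{margin:.0f}x margin over the 10 kHz figure")
```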

But then this part comes along:

Table 6-1 shows the recommended clock speed increasing to 3.6 MHz for Vdd greater than 4 V.



Here is the datasheet:

 

That chip uses the SPI clock to generate the sample clock for the ADC.

> The MCP3004/3008 A/D converters employ a conventional SAR architecture. With this architecture, a sample is acquired on an internal sample/hold capacitor for 1.5 clock cycles starting on the first rising edge of the serial clock once CS has been pulled low.

The ADC functions with a sample-and-hold circuit where a capacitor is charged from the input, then converted to a digital output at the end of the sample period.

> When the MCP3004/3008 initiates the sample period, charge is stored on the sample capacitor. When the sample period is complete, the device converts one bit for each clock that is received. It is important for the user to note that a slow clock rate will allow charge to bleed off the sample capacitor while the conversion is taking place.

> At temperatures above +85°C, the part will maintain proper charge on the sample capacitor for at least 1.2 ms after the sample period has ended. This means that the time between the end of the sample period and the time that all 10 data bits have been clocked out must not exceed 1.2 ms (effective clock frequency of 10 kHz). Failure to meet this criterion may introduce linearity errors into the conversion outside the rated specifications.

So from this you know that the clock must be fast enough to clock out all 10 data bits within 1.2 ms.

My guess is simply that sample-capacitor discharge is the issue: for some reason, the leakage path that discharges the sample capacitor has a resistance that decreases as Vdd increases. IC designers tend to use transistors as resistors because they take less die area and can be more repeatable, and I'm guessing that something in how they did that here introduced a voltage dependence. That's not great design practice, but it was probably cheaper. If you take the usual `V_c = V_0 e^{-t/(RC)}` and make R inversely proportional to Vdd, you get the described behavior.
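To put rough numbers on that discharge picture: taking the sample capacitor as 20 pF (the datasheet's typical value) and guessing a leakage resistance of 100 GΩ (purely illustrative, chosen to make the 1.2 ms limit plausible), the exponential droop looks like this:

```python
import math

# C_SAMPLE is the datasheet's typical value; R_LEAK is an illustrative guess.
C_SAMPLE = 20e-12      # farads, sample/hold capacitor
R_LEAK = 1e11          # ohms, assumed leakage path (guess)
V0 = 5.0               # volts held at end of sample period
LSB = 5.0 / 1024       # one code step at Vref = 5 V

def droop(t):
    """Voltage lost to leakage after t seconds: V0 * (1 - exp(-t/(R*C)))."""
    return V0 * (1.0 - math.exp(-t / (R_LEAK * C_SAMPLE)))

# Conversion time for 10 bits at a 0.75 MHz clock vs the 1.2 ms hold limit
fast = droop(10 / 0.75e6)
slow = droop(1.2e-3)
print(f"droop at 0.75 MHz: {fast * 1e6:.1f} uV, at the 1.2 ms limit: {slow * 1e3:.2f} mV")
```

With these guesses the droop is tens of microvolts at 0.75 MHz but around 3 mV, over half an LSB at Vref = 5 V, at the 1.2 ms limit, which is roughly where the datasheet draws its line.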
 
> So from this you know that the clock must be fast enough to clock out all 10 data bits within 1.2 ms.

It seems 0.75 MHz SPCLK, or a ~41,667 Hz sample rate, is more than enough.
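For reference, the ~41,667 Hz figure falls out of the frame length. A minimal MCP3008 frame is about 18 clocks (start, mode, and channel bits, the 1.5-clock sample period, a null bit, and 10 data bits); driving it with three whole SPI bytes, as is common, takes 24:

```python
F_SCLK = 0.75e6   # chosen SPI clock, Hz

# Minimal frame: ~5 setup clocks + 1.5-clock sample + null bit + 10 data bits
clocks_min = 18
rate_min = F_SCLK / clocks_min
print(f"{rate_min:.0f} samples/s with an 18-clock frame")    # ~41,667

# Three whole 8-bit SPI transfers (the usual simple framing) use 24 clocks
rate_bytes = F_SCLK / 24
print(f"{rate_bytes:.0f} samples/s with 3-byte transfers")   # 31,250
```

Either way, the sample rate is orders of magnitude faster than a thermistor needs.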

Thank you for taking the time to break that down.
 
> My guess is simply that sample-capacitor discharge is the issue: for some reason, the leakage path that discharges the sample capacitor has a resistance that decreases as Vdd increases.

Losses in a capacitive sample-and-hold are often due to junction and transistor leakages, all of which increase with temperature; in some cases they double for every 10 °C rise in junction temperature, so leakages that are insignificant at room temperature can become huge at 85 °C. This is compounded by reductions in transistor gain at temperature, which manifests as if the sample capacitor were being charged through an effectively higher transistor channel resistance at high temperatures.
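That doubling-per-10 °C rule of thumb compounds quickly. A quick sketch, using the rule exactly as stated rather than any measured curve:

```python
def leakage_scale(t_c, t_ref=25.0, doubling=10.0):
    """Leakage multiplier relative to t_ref, doubling every `doubling` degrees C."""
    return 2.0 ** ((t_c - t_ref) / doubling)

for t in (25, 50, 85):
    print(f"{t} C: {leakage_scale(t):.0f}x the room-temperature leakage")
```

At 85 °C the rule predicts 64× the room-temperature leakage, which is consistent with the datasheet pinning its 1.2 ms hold-time guarantee at +85 °C.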

This is all somewhat moot for a system that operates essentially at room temperature, since the highest interior operating temperatures will be below 50 °C, unless your electronics are running ridiculously hot.
 
