Traditional PLCs are migrating towards traditional DCSs, and vice versa.
Honeywell PlantScape/Experion is an example of a DCS moving towards a PLC.
Fisher Delta V is an example of a PLC moving towards a DCS.
In a small or medium plant, where there are about 100-200 I/Os total, many people use a PLC (Allen-Bradley), and many use DCS (Delta V, PlantScape).
By the way, SCADA is just that: supervisory control and data acquisition. I don't usually see SCADA within a plant; I usually see it used to supervise multiple small sites. In my industry, well monitoring is often done by SCADA systems, with little RTUs at each well site.
If you have a small plant, I suggest you talk to a PLC and DCS vendor. They ALL have a system/solution starting at 5 I/Os and going up to 500 I/Os. They all have pretty much the same stuff. The difference may be the industry and application specific experience of the office nearest you. Oh, and price, but these change daily.
"Do not worry about your problems with mathematics, I assure you mine are far greater."
Albert Einstein
Have you read FAQ731-376 to make the best use of Eng-Tips Forums?
The acronyms are useless for product comparison. All three (DCS, PLC and SCADA) were coined around the 1970s, when integrated circuits were all the rage replacing discrete transistor components and a 16-bit microprocessor was the leading edge of technology.
The PLC continues to use ladder logic among programming choices. The function block programming or configuration is preferred for PID style control. SCADA implies remote access and may be more software intensive than the typical PLC or DCS application. These days the operator interface for all is likely to be a Windows based graphical display. In the 1970’s the PLC used pushbuttons, lights, thumbwheels and BCD digital displays. The DCS started with a graphical display, mostly with face-plates that resembled 3” X 6” style dedicated controllers. SCADA was typically a graphical display but perhaps a light mimic style graphic panel with leased telephone lines to the remote sites. The Fisher ROC is a modern remote terminal unit (RTU) for the I/O and local control. SCADA originally used mini-computers like the DEC PDP-8, PDP-11 or IBM 1800. Let's Google those machines to see what archives are available.
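The function-block style of PID configuration mentioned above can be pictured in a few lines. This is a minimal, hypothetical sketch in Python of what a discrete PID "block" does each scan; the class name, gains and `execute()` interface are made up for illustration and are not any vendor's actual block API.

```python
# Hypothetical discrete PID "function block" sketch. Names and the
# interface are illustrative only, not a real vendor API.
class PidBlock:
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def execute(self, setpoint, measurement):
        """One scan of the block: returns the controller output."""
        error = setpoint - measurement
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return (self.kp * error
                + self.ki * self.integral
                + self.kd * derivative)

# Example: a proportional-only block doubles the error each scan.
block = PidBlock(kp=2.0, ki=0.0, kd=0.0, dt=0.1)
print(block.execute(setpoint=10.0, measurement=8.0))  # 4.0
```

In a DCS you would wire such blocks together graphically rather than write them; the point is only that each block carries its own state and is executed once per scan.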
In our industry, water and wastewater, most plants have a SCADA system to interface with the plant control system. Typical vendors include Intellution, Wonderware, CiTect, and Rockwell Automation.
Well, technically the SCADA is the system; PLCs or DCSs are the hardware platform on which the SCADA system is built. As mentioned by others, history has more to do with it than anything else. PLCs were developed to handle (relatively) high-speed discrete digital I/O, high-speed counters, timers etc. with the idea of machine control and repeat precision in mind. DCS systems were developed to handle multiple loops of analog control, cascading loops and more computation-intensive math, even though reaction speed may be lost, because in the process control industry things rarely happen in fractions of a second. DCSs were also the first to come up with graphical user interfaces.
Now, PLCs are able to handle analog I/O better than they used to, and processor speed has improved to where DCSs can handle higher-speed I/O better than they used to. GUI systems as mentioned above, i.e. Intellution, Wonderware etc., eliminated that DCS advantage, but it does still depend on the size of the system, redundancy requirements etc. DCS-based systems have a long history of successful operation in very large systems that, AFAIK, PLCs with GUIs have not matched yet.
Still, for a lot of applications, the hardware platform makes little difference other than the familiarity of the users and technicians. For instance, if you have a wastewater district that uses PLCs for pump station controls and wants to put together an overall SCADA system to look at the entire district as a whole, it would make sense to use the same PLCs they are currently using as the platform for the SCADA system. On the other hand, if a chemical plant is not using PLCs for anything now, the analog-intense application would warrant a DCS as a better solution, maybe tying in a few PLC-like functions where necessary.
Eng-Tips: Help for your job, not for your homework Read faq731-376
Most DCS platforms use dedicated hardware controllers with one or more redundant partner processors synchronously running the same application code. If one processor dies, the other is at exactly the same point in the execution cycle and (in theory) seamlessly takes over control. The transfer is pretty reliable, but like all things designed by mortal man it is not perfect.
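A rough way to picture that lockstep transfer is below. This is purely illustrative Python, not how any real controller firmware works: both processors track the same execution point, so when the primary dies the backup picks up the very next step with no gap.

```python
# Illustrative sketch of redundant lockstep controllers. Not real
# firmware behaviour; it only shows that no execution step is lost.
def run_redundant(steps, fail_at):
    """Both processors advance in lockstep; on primary failure the
    backup continues from the identical point in the cycle."""
    active = "primary"
    executed_by = []
    for step in range(steps):
        if active == "primary" and step == fail_at:
            active = "backup"  # transfer: same step, nothing skipped
        executed_by.append((step, active))
    return executed_by

trace = run_redundant(steps=5, fail_at=2)
print(trace)
```

Every step number appears exactly once in the trace, which is the "seamless" property the synchronized pair is supposed to give you (in theory).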
The PC's typically provide the user's 'window' into the DCS where a PLC would be likely to employ a dedicated HMI rather than the PC executing the actual control program. Even here the line between DCS and PLC is blurring as the newer dedicated HMIs are often PC-based running the odious Windows CE.
In a larger DCS there will be several dedicated workstations for Operator Interfaces, plus a Data Historian, an Engineering station, etc, sharing common access to the system whereas most PLCs still have a single HMI. Until recently Sun SparcStations were highly favoured in the process and power industries for their stability and reliability. Some of our Sparcs were not rebooted in a two year period between generating unit outages, and very rarely suffered any problems. Windows has improved since the days of the Blue Screen of Death, but still comes nowhere close to Solaris for stability. I guess I'm a Unix die-hard because reliable hardware and stable software makes my life easier and I like the fact that I am in control in Unix, not Bill and the sodding Wizards which think they know better than me and that damned paper clip and... long live Solaris!
The Sparcs connect via a hardware interface to the DCS data highway. The actual I/O is hosted on a controller running the application software: that's i486-based on the MDX processor for the WDPF system; not sure what processor is in Foxboro's I/A CP60.
Your cost estimate is on the low side, but yes, very expensive.
Marketing drives each of these systems' hardware and software platforms. When Windows NT was sufficiently stable for CAD stations, the public indicated a willingness, or desire, to use commercially available open-system hardware and software instead of the proprietary networks and systems of the previous era. In the early days of electronic controls, hand-held radios generated output spikes in the analog controls, and great concern existed about radios permitted in the "host computer room". With reasonable grounding practices, few are concerned about radios today.
Among the differences between the systems in the previous decades, PLC manufacturers shipped hardware in a box for the customer or others to integrate. This was typically discrete contact logic, timers and counters to activate motors or solenoids. These were often sold to the industry as relay replacement systems.
SCADA systems were often a software package integrated with other company hardware as a system. These covered the remote pipeline and unmanned utility pumping station control via a central dispatch center.
The DCS manufacturers staged their hardware and software as a system with the cabinets, assembly, wiring, testing and optional application programming among the services provided. These were the replacement for single-loop PID controllers and analog indicators. Customers were continuous process plants for water, waste water, boiler, refinery, chemical etc. Customers often required intrinsic safety. This required a systems approach to comply with the codes and standards.
Again, the use of 64-bit microprocessors with graphical workstations permits all of the features of each of these systems regardless of the previous differences.
"I guess I'm a Unix die-hard because reliable hardware and stable software makes my life easier and I like the fact that I am in control in Unix, not Bill and the sodding Wizards which think they know better than me and that damned paper clip and... long live Solaris!"
I had a customer in an RBOC (telephone guys) who used our hp-ux stuff. This was about 8 years ago now. Anyway, he mentioned that he had been doing odd-job apps for his company using Linux. Since telco apps are required to have 99.9999% uptime (well under a minute of app downtime per year), I got to thinking about Linux. Been using Linux for about 6 years now (maybe a little longer; Red Hat version 5.2)...

I've been running a server for the past couple of years with a standard Linux distro. There was only one time where I actually had a problem, other than me doing stupid things like pulling the power plug, and UPS batteries dying. This is with a standard COTS (consumer off-the-shelf) PC.

I used to think that problems with this sort of system would be 1/2 hardware faults (memory hits, etc.) and 1/2 operating system. I've changed my mind. The PC hardware is remarkably hardy! The difference between the MTBF of a consumer-grade system and a "robust" commercial system is largely a matter of specsmanship, as anyone knows who has had to generate MTBF figures...

The one app problem was one time doing a file backup over NFS. It hasn't repeated itself. Glitch in the LAN?

The COTS system is a PC that someone "threw away" at me. "Too slow" for the newer windoze operating system. Works fine for me!

I have changed from a die-hard Unix fan to a die-hard 'nix fan. Since it's a 'nix, all my Solaris commands work fine, and I finally got used to using ls -l rather than ll as in hp-ux. That was the hardest thing to (un)learn.

Of course, Solaris is available (for free) for the PC nowadays. Should also mention NetBSD and the other BSD 'nixes that are freely available. I'm guessing that peripheral drivers for the newer hardware will first be implemented on Linux (other than windoze). Not that that is any great whoopie. Then there is the area of SMP (symmetric multiprocessing) and 64-bit versions of CPUs.

I "feel" that it is a good idea to "learn" Linux. Not that there is really anything to "learn". However, if you play with it a bit, you can put it on your resume, right next to hp-ux, solaris, aix, etc. Helps when you are dealing with Human Resources when looking for a job, or PHBs (Pointy-Haired Bosses from the Dilbert comic strip) who don't know the lack of difference between the operating systems and will only "key in" on their operating system of choice.
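As a quick sanity check on uptime figures like that, converting an availability percentage into allowed downtime per year is a one-liner:

```python
# Convert an availability percentage into downtime minutes per year.
def downtime_minutes_per_year(availability_pct):
    return (1 - availability_pct / 100) * 365 * 24 * 60

# Six nines (99.9999%) allows only about half a minute per year;
# five nines (99.999%) allows a bit over five minutes.
print(round(downtime_minutes_per_year(99.9999), 2))  # 0.53
print(round(downtime_minutes_per_year(99.999), 2))   # 5.26
```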
First
---------
PLC: Logic scanning is continuous, and the scan time defines the overall time to execute the full logic on the controller.
DCS: Logic execution is modular, and the scan time is definable for each module, so there is no overall scan time; logic execution is based on process requirements (fast or slow loops).
Second
--------
PLC: The control in a PLC is not distributed throughout the network; it is centrally controlled. So the probability of total failure is greater, and maintenance is also more difficult.
DCS: The control is distributed, and failure of one node has only a partial effect on the process, so the probability of total process failure is less. Maintenance is easier, and features like hot swapping of DCS equipment give more flexibility to maintenance people.
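The scan-time difference in the first point can be sketched with a toy scheduler. This is illustrative Python only, with made-up names; real PLC/DCS firmware is obviously nothing like this, but it shows every-rung-every-scan versus per-module execution rates.

```python
# Toy scheduler sketch (illustrative only, not real firmware).
def plc_scan(rungs, cycles):
    """PLC style: every rung executes on every scan."""
    runs = {name: 0 for name, _ in rungs}
    for _ in range(cycles):
        for name, logic in rungs:
            logic()
            runs[name] += 1
    return runs

def dcs_schedule(modules, ticks):
    """DCS style: each module runs only when its own period elapses."""
    runs = {name: 0 for name, _, _ in modules}
    for t in range(ticks):
        for name, period, logic in modules:
            if t % period == 0:
                logic()
                runs[name] += 1
    return runs

noop = lambda: None
print(plc_scan([("fast", noop), ("slow", noop)], cycles=10))
# both rungs run every scan: 10 times each
print(dcs_schedule([("fast", 1, noop), ("slow", 5, noop)], ticks=10))
# "fast" runs every tick, "slow" only every 5th tick
```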
One HUGE consideration is whether you need true on-line programming, where you are able to change the program without shutting it down to recompile or whatever. Many PC-based controllers have to do exactly that. Most higher-end PLCs allow on-line changes.
somnathchakraborty suggests that a PLC is not distributed but is centrally controlled. However, today's DCS is not very distributed either. When Honeywell used the term distributed in 1975, they had a controller with eight simple loops. By 1985 you could buy a Fisher Provox batch controller capable of 500 simple loops.
The maintenance and on-line programming change comments are still applicable.
>PLC: Logic scanning is continuous and scan time defines the overall time to execute the full logic on the controller.
Not really; most PLCs have time-based interrupts, and you can run, for example, PID loops from there. So the scan time IS definable. It's all a matter of the software design.
>DCS: Logic execution is modular and scan time is definable for each module, so there is no overall scan time; logic execution is based on process requirements (fast or slow loops).
Not really either; most DCSs do have a scan time, sometimes a very slow one. Again, you have to design your application correctly.
>PLC: The control in a PLC is not distributed throughout the network; it is centrally controlled. So the probability of total failure is greater and maintenance is difficult.
Not true. In a network of PLCs, each is running its own program, just like a DCS.
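The time-based-interrupt point above can be sketched as follows. This is a simulated timeline in Python, with invented numbers, not real PLC timing: even though the main scan takes an uneven amount of time each pass, a fixed-period interrupt task (where a PID loop would live) still fires on schedule.

```python
# Simulated timeline (illustrative only): uneven main scans versus a
# fixed-rate interrupt task, where a PLC would typically run PID.
def simulate(main_scan_times_ms, interrupt_period_ms):
    """Count interrupt-task executions over a run of uneven scans."""
    clock = 0
    next_interrupt = interrupt_period_ms
    interrupt_runs = 0
    for scan in main_scan_times_ms:
        clock += scan               # one main-program scan completes
        while next_interrupt <= clock:
            interrupt_runs += 1     # fixed-rate task fires regardless
            next_interrupt += interrupt_period_ms
    return interrupt_runs

# Six uneven scans totalling 100 ms; a 10 ms interrupt task still
# executes 10 times over that window.
print(simulate([7, 23, 12, 18, 25, 15], interrupt_period_ms=10))  # 10
```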
I almost think that a DCS is a PLC/SCADA with a bigger marketing budget.
Most if not all DCS manufacturers sell a solution, including hardware, software, commissioning, and after-market support. PLC vendors sell an empty processor and a pile of bits to be integrated by a third-party vendor. The DCS vendor has usually spent a substantial amount of time and money on proving the interoperability of all the components of their system, including processors, I/O, MMI, Historian, etc. This level of testing is much less common in a PLC system because it is typically performed by the system integrator on a case-by-case basis, where the cost of very detailed testing cannot be justified for a one-off job. The chance of an unpredicted or unforeseen condition arising in a PLC system is therefore higher.
The technical capabilities of high-end PLC hardware is pretty close to that of a mid-range DCS.