INTRODUCTION
1.1 COMPANY PROFILE
Merino Laminates has a market presence in over 60 countries, manufacturing multiple products to world-class standards. Merino Laminates is a leading manufacturer and exporter of decorative laminates for the interiors segment. They showcase a range of world-class, premium laminates with more than 10,000 designs, textures, colors and finishes. Complementary products from the Merino Group include plywood, melamine-faced particle board and post-formed panels for the interiors industry. They have committed themselves to upholding the highest manufacturing standards, a practice that has earned all their facilities pertinent certifications including ISO 9001, ISO 14001 and OHSAS 18001.
The Merino Laminates brand came into existence in 1981, when the Merino Group
extended its activity to laminates manufacturing, having entered the interiors segment
with Plywood in 1974. About the Merino Group: Founded in 1968, the Merino Group
is today a US$ 165 million group with diverse business interests which include Panel
& Panel Products, Biotechnology (Agriculture & Food Processing) and Information
Technology (IT). They are driven by their constant effort to maintain Economy, Excellence and Ethics in all their businesses. They export to 60 countries around the globe, and employ 3,000 people across three manufacturing sites, 19 offices in India and an office in the U.S.
Fig 1.1 Merino Panels and Products Ltd (NCR, Delhi-Rohtak Road)
Fig 1.1 shows the location of the industry. In August 1994, the company set up a 6,000 tpa formaldehyde manufacturing plant as backward integration. Formaldehyde is used to prepare the resins required in the manufacture of laminates. In 1994-95, the installed capacity of the laminating plant was increased from 42.50 lac sq mtr to 80 lac sq mtr. In September 1995, the company came out with a public issue to part-finance the expansion-cum-modernisation programme involving the laminating capacity increase from 80 lac sq mtr to 108 lac sq mtr. During 2000-2001 the company obtained ISO 9002:1994 certification from DNV, the Netherlands, in its branches at Kolkata, Mumbai, Bangalore, Chennai, Delhi, Pune, Nagpur and Ahmedabad. The installed capacity of decorative laminates increased from 80 lac sq mtr to 108 lac sq mtr, and in 2001-02 it was further increased to 167 lac sq mtr.
1.5
Fig 1.3 shows the multiple Merino products manufactured at their various units. The laminate types described above are used in interior solutions, and the food products they have launched in the market are showing good results. Some quality standards for laminates are mentioned below:
ANTI-BACTERIAL: Antibacterial properties are important for decorative laminates because these laminates are used as kitchen tops, counter tops, cabinets and table tops that may be in constant contact with food materials and young children. Antibacterial properties ensure that bacterial growth is minimal. One standard for antibacterial performance is ISO 22196:2007, which is based on the Japanese Industrial Standards (JIS) code Z 2801. This is one of the standards most often referred to in the industry with regard to tests on microbial activity (specifically bacteria); in JIS Z 2801, two bacterial species are used as a standard, namely E. coli and Staphylococcus aureus. However, some companies may take the initiative to test more than just these two bacteria, and may also replace Staphylococcus aureus with MRSA, the methicillin-resistant version of the same bacterium. Different countries may choose to specify different types of microbes for testing, especially if they have identified bacterial groups of particular concern in their countries.
Fig 1.4 shows the various departments of the industry; at every level a decentralization mechanism is followed for achieving the industrial goals. The focused section was electronics and electrical, which is explained further below.
CHAPTER-2
AUTOMATION AND INSTRUMENTATION
2.1
INTRODUCTION TO AUTOMATION
2.2
PROGRAMMABLE LOGIC CONTROLLER (PLC)
Unlike general-purpose computers, the PLC is designed for multiple inputs and output
arrangements, extended temperature ranges, immunity to electrical noise, and
resistance to vibration and impact. Programs to control machine operation are typically stored in battery-backed-up or non-volatile memory.
A PLC is an example of a hard real-time system, since output results must be produced in response to input conditions within a limited time, otherwise unintended operation will result. The PLC was developed for the automobile manufacturing industry; before the PLC, control logic for manufacturing automobiles was mainly composed of relays, cam
timers, drum sequencers, and dedicated closed-loop controllers. Since these could
number in the hundreds or even thousands, the process for updating such facilities for
the yearly model change-over was very time consuming and expensive, as electricians
needed to individually rewire relays to change the logic. Digital computers, being
and networking. The data handling, storage, processing power and communication
capabilities of some modern PLCs are approximately equivalent to desktop
computers. PLC-like programming combined with remote I/O hardware allows a general-purpose desktop computer to overlap some PLCs in certain applications.
Regarding the practicality of these desktop computer based logic controllers, it is
important to note that they have not been generally accepted in heavy industry
because the desktop computers run on less stable operating systems than do PLCs,
and because the desktop computer hardware is typically not designed to the same
levels of tolerance to temperature, humidity, vibration, and longevity as the processors
used in PLCs.
HMIs are also referred to as man-machine interfaces (MMIs) and graphical user interfaces (GUIs). Simple HMI units are available, as well as HMI software installed on a computer connected to the PLC via a communication interface.
Under the IEC 61131-3 standard, PLCs can be programmed using standards-based programming languages. A graphical programming notation called
Sequential Function Charts is available on certain programmable controllers.
Initially most PLCs utilized Ladder Logic Diagram programming, a model which emulated the electromechanical control panel devices (such as the contacts and coils of relays) which PLCs replaced. This model remains common today.
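As an illustration only (not taken from the original text), the classic start/stop "seal-in" rung of ladder logic can be sketched in Python: the motor output is energized when (Start is pressed OR the motor is already running) AND Stop is not pressed, evaluated once per PLC scan.

```python
def seal_in_rung(start: bool, stop: bool, motor: bool) -> bool:
    """One scan of a start/stop seal-in rung.

    Ladder equivalent:  --[Start]--+--[/Stop]----(Motor)
                        --[Motor]--+
    """
    return (start or motor) and not stop

# Simulated scan cycle: each tuple is the (start, stop) input state for one scan.
motor = False
for start, stop in [(True, False), (False, False), (False, True), (False, False)]:
    motor = seal_in_rung(start, stop, motor)
# The motor latches on after the Start press, then drops out when Stop is pressed.
```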
Fig 2.2 shows a control panel with PLC (grey elements in the center). The unit consists of separate elements, from left to right: power supply, controller, and relay units for input and output.
A microcontroller-based design would be appropriate where many units will be produced, so the development cost (design of power supplies, input/output hardware and the necessary testing and certification) can be spread over many sales, and where the end user would not need to alter the control. Automotive applications are an example:
millions of units are built each year, and very few end-users alter the programming of
these controllers. However, some specialty vehicles such as transit buses
economically use PLCs instead of custom-designed controls, because the volumes are
low and the development cost would be uneconomical.
Very complex process control, such as used in the chemical industry, may require
algorithms and performance beyond the capability of even high-performance PLCs.
Very high-speed or precision controls may also require customized solutions; for
example, aircraft flight controls. Single-board computers using semi-customized or
fully proprietary hardware may be chosen for very demanding control applications
where the high development and maintenance cost can be supported. "Soft PLCs"
running on desktop-type computers can interface with industrial I/O hardware while
2.3
INTRODUCTION TO SCADA
buildings, airports, ships, and space stations. They monitor and control heating, ventilation, and air conditioning (HVAC) systems, access, and energy consumption.
Common system components: a SCADA system usually consists of the following subsystems:
a) A human-machine interface or HMI is the apparatus or device which presents
processed data to a human operator, and through this, the human operator
monitors and controls the process.
b) SCADA is also used as a safety tool, as in lock-out/tag-out procedures.
c) A supervisory (computer) system, gathering (acquiring) data on the process
and sending commands (control) to the process.
d) Remote terminal units (RTUs) connecting to sensors in the process, converting
sensor signals to digital data and sending digital data to the supervisory
system.
e) Programmable logic controllers (PLCs), used as field devices because they are more economical, versatile, flexible, and configurable than special-purpose RTUs.
f) Communication infrastructure connecting the supervisory system to the
remote terminal units.
g) Various process and analytical instrumentation.
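To make the roles of these subsystems concrete, here is a hypothetical Python sketch (the class names, the 12-bit 0-10 V scaling, and the sensor values are illustrative assumptions, not from the report): the supervisory master station polls RTUs, which digitize raw sensor signals.

```python
class RTU:
    """Remote terminal unit: converts a raw sensor signal to digital data."""
    def __init__(self, sensor_read):
        self.sensor_read = sensor_read  # callable returning an analog value in volts

    def poll(self):
        # Quantize the 0-10 V analog value to a 12-bit count (0..4095),
        # as a typical RTU analog input card might.
        volts = self.sensor_read()
        return max(0, min(4095, round(volts / 10.0 * 4095)))

class MasterStation:
    """Supervisory system: gathers (acquires) data from RTUs by polling."""
    def __init__(self, rtus):
        self.rtus = rtus  # dict of tag name -> RTU

    def scan(self):
        return {name: rtu.poll() for name, rtu in self.rtus.items()}

master = MasterStation({"flow": RTU(lambda: 2.5), "temp": RTU(lambda: 7.5)})
snapshot = master.scan()   # {"flow": 1024, "temp": 3071}
```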
The term SCADA usually refers to centralized systems as shown in Fig 2.4 which
monitor and control entire sites, or complexes of systems spread out over large areas
(anything from an industrial plant to a nation). Most control actions are performed
automatically by RTUs or by PLCs. Host control functions are usually restricted to
basic overriding or supervisory level intervention. For example, a PLC may control
the flow of cooling water through part of an industrial process, but the SCADA
system may allow operators to change the set points for the flow, and enable alarm
conditions, such as loss of flow and high temperature, to be displayed and recorded.
The feedback control loop passes through the RTU or PLC, while the SCADA system
monitors the overall performance of the loop.
Data acquisition begins at the RTU or PLC level and includes meter readings and
equipment status reports that are communicated to SCADA as required. Data is then
compiled and formatted in such a way that a control room operator using the HMI can
make supervisory decisions to adjust or override normal RTU (PLC) controls. Data
may also be fed to an Historian, often built on a commodity Database Management
System, to allow trending and other analytical auditing. SCADA systems typically
implement a distributed database, commonly referred to as a tag database, which
contains data elements called tags or points. A point represents a single input or output
value monitored or controlled by the system. Points can be either "hard" or "soft". A
hard point represents an actual input or output within the system, while a soft point
results from logic and math operations applied to other points. (Most implementations
conceptually remove the distinction by making every property a "soft" point
expression, which may, in the simplest case, equal a single hard point.) Points are
normally stored as value-timestamp pairs: a value, and the timestamp when it was
recorded or calculated. A series of value-timestamp pairs gives the history of that
point. It is also common to store additional metadata with tags, such as the path to a
field device or PLC register, design time comments, and alarm information. SCADA
systems are significantly important systems used in national infrastructures such as
electric grids, water supplies and pipelines. However, SCADA systems may have
security vulnerabilities, so the systems should be evaluated to identify risks and
solutions implemented to mitigate those risks.
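A minimal sketch of such a tag database, assuming hypothetical tag names and a plain in-memory store, might look like this in Python: hard points are written as value-timestamp pairs, while a soft point is computed from other points when read.

```python
import time

class TagDatabase:
    """Sketch of a SCADA tag database: points stored as value-timestamp
    pairs, with 'hard' points written by field I/O and 'soft' points
    derived from other points by logic/math operations."""

    def __init__(self):
        self.history = {}   # tag name -> list of (value, timestamp) pairs
        self.soft = {}      # tag name -> function computing value from the db

    def write(self, tag, value, ts=None):
        # Hard point: an actual input or output value within the system.
        stamp = ts if ts is not None else time.time()
        self.history.setdefault(tag, []).append((value, stamp))

    def define_soft(self, tag, fn):
        # Soft point: result of logic/math applied to other points.
        self.soft[tag] = fn

    def read(self, tag):
        if tag in self.soft:
            return self.soft[tag](self)
        return self.history[tag][-1][0]   # latest recorded value

db = TagDatabase()
db.write("flow_in", 12.0)
db.write("flow_out", 9.5)
# Soft point: loss estimated from two hard points.
db.define_soft("flow_loss", lambda d: d.read("flow_in") - d.read("flow_out"))
loss = db.read("flow_loss")   # 2.5
```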
Human-machine interface: A human-machine interface or HMI is the apparatus
which presents process data to a human operator, and through which the human
operator controls the process. HMI is usually linked to the SCADA system's databases
and software programs, to provide trending, diagnostic data, and management
information such as scheduled maintenance procedures, logistic information, detailed
schematics for a particular sensor or machine, and expert-system troubleshooting
guides. The HMI system usually presents the information to the operating personnel
graphically, in the form of a mimic diagram. This means that the operator can see a
schematic representation of the plant being controlled. For example, a picture of a
pump connected to a pipe can show the operator that the pump is running and how
much fluid it is pumping through the pipe at the moment. The operator can then
switch the pump off. The HMI software will show the flow rate of the fluid in the pipe
decrease in real time. Mimic diagrams may consist of line graphics and schematic
symbols to represent process elements, or may consist of digital photographs of the
process equipment overlain with animated symbols.
The HMI package for the SCADA system typically includes a drawing program that
the operators or system maintenance personnel use to change the way these points are
represented in the interface. These representations can be as simple as an on-screen
traffic light, which represents the state of an actual traffic light in the field, or as complex as a multi-projector
display representing the position of all of the elevators in a skyscraper or all of the
trains on a railway. An important part of most SCADA implementations is alarm
handling. The system monitors whether certain alarm conditions are satisfied, to
determine when an alarm event has occurred. Once an alarm event has been detected,
one or more actions are taken (such as the activation of one or more alarm indicators,
and perhaps the generation of email or text messages so that management or remote
SCADA operators are informed). In many cases, a SCADA operator may have to
acknowledge the alarm event; this may deactivate some alarm indicators, whereas
other indicators remain active until the alarm conditions are cleared.
Alarm conditions can be explicit (for example, an alarm point is a digital status point that has either the value NORMAL or ALARM, calculated by a formula based on the values in other analogue and digital points) or implicit: the SCADA system might automatically monitor whether the value in an analogue point lies outside high and low limit values associated with that point.
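The implicit case can be sketched in a few lines of Python (the limit values and readings are illustrative, not from the report):

```python
def alarm_state(value, low, high):
    """Implicit alarm condition: ALARM when an analogue point lies outside
    its associated low/high limit values, NORMAL otherwise."""
    return "ALARM" if (value < low or value > high) else "NORMAL"

# e.g. a cooling-water flow point with limits 40..80 L/min (hypothetical numbers)
readings = [55.0, 82.5, 39.0, 60.0]
states = [alarm_state(r, low=40.0, high=80.0) for r in readings]
# states: NORMAL, ALARM, ALARM, NORMAL
```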
The SCADA system as shown in Fig 2.5 is the example of water filtering system with
alarm. More examples of alarm indicators include a siren, a pop-up box on a screen,
or a coloured or flashing area on a screen (that might act in a similar way to the "fuel
tank empty" light in a car); in each case, the role of the alarm indicator is to draw the
operator's attention to the part of the system 'in alarm' so that appropriate action can
be taken. In designing SCADA systems with reference to [2] care must be taken when
a cascade of alarm events occurs in a short time, otherwise the underlying cause
(which might not be the earliest event detected) may get lost in the noise.
Supervisory station: The term supervisory station refers to the servers and software
responsible for communicating with the field equipment (RTUs, PLCs, SENSORS
etc.), and then to the HMI software running on workstations in the control room, or
elsewhere. In smaller SCADA systems, the master station may be composed of a
single PC. In larger SCADA systems, the master station may include multiple servers,
distributed software applications, and disaster recovery sites. To increase the integrity of the system, the multiple servers will often be configured in a dual-redundant or hot-standby formation, providing continuous control and monitoring in the event of a server failure.
Operational philosophy: For some installations, the costs that would result from the
control system failing are extremely high. Hardware for some SCADA systems is
ruggedized to withstand temperature, vibration, and voltage extremes. In the most
critical installations, reliability is enhanced by having redundant hardware and
communications channels, up to the point of having multiple fully equipped control
centres. A failing part can be quickly identified and its functionality automatically
taken over by backup hardware. A failed part can often be replaced without
interrupting the process. The reliability of such systems can be calculated statistically
and is stated as the mean time to failure, which is a variant of Mean Time Between
Failures (MTBF). The calculated mean time to failure of such high reliability systems
can be on the order of centuries.
Communication infrastructure and methods:
a) SCADA systems have traditionally used combinations of radio and direct
wired connections, although SONET/SDH is also frequently used for large
systems such as railways and power stations. The remote management or
monitoring function of a SCADA system is often referred to as telemetry. [2]
Some users want SCADA data to travel over their pre-established corporate
networks or to share the network with other applications. The legacy of the
early low-bandwidth protocols remains, though.
b) SCADA protocols are designed to be very compact. Many are designed to
send information only when the master station polls the RTU. With increasing
security demands (such as North American Electric Reliability Corporation
(NERC) and Critical Infrastructure Protection (CIP) in the US), there is
increasing use of satellite-based communication. This has the key advantages
that the infrastructure can be self-contained (not using circuits from the public
telephone system), can have built-in encryption, and can be engineered to the
availability and reliability required by the SCADA system operator. Earlier
experiences using consumer-grade VSAT were poor. Modern carrier-class
systems provide the quality of service required for SCADA. RTUs and other
automatic controller devices were developed before the advent of industry
wide standards for interoperability.
The United States Army's Training Manual 5-601 covers "SCADA Systems for
C4ISR Facilities". SCADA systems have evolved through 3 generations as follows:
First generation, "Monolithic": In the first generation, computing was done by
mainframe computers. Networks did not exist at the time SCADA was developed.
Thus SCADA systems were independent systems with no connectivity to other
systems. Wide Area Networks were later designed by RTU vendors to communicate
with the RTU. The communication protocols used were often proprietary at that time.
The first-generation SCADA system was redundant since a back-up mainframe
system was connected at the bus level and was used in the event of failure of the
primary mainframe system. Some first generation SCADA systems were developed as
"turn key" operations that ran on minicomputers like the PDP-11 series made by the
Digital Equipment Corporation (DEC). These systems were read only in the sense that
they could display information from the existing analog based control systems to
individual operator workstations but they usually didn't attempt to send control signals
to remote stations due to analog based telemetry issues and control center
management concerns with allowing direct control from computer workstations. They
would also perform alarming and logging functions and calculate hourly and daily
system commodity accounting functions.
Second generation, "Distributed": The processing was distributed across multiple
stations which were connected through a LAN and they shared information in real
time. Each station was responsible for a particular task thus making the size and cost
of each station less than the one used in First Generation. The network protocols used
were still mostly proprietary, which led to significant security problems for any
SCADA system that received attention from a hacker. Since the protocols were
proprietary, very few people beyond the developers and hackers knew enough to
determine how secure a SCADA installation was. Since both parties had vested
interests in keeping security issues quiet, the security of a SCADA installation was
often badly overestimated, if it was considered at all.
Third generation"Networked": Due to the usage of standard protocols and the fact
that many networked SCADA systems are accessible from the Internet, the systems
are potentially vulnerable to remote attack. On the other hand, the usage of standard
protocols and security techniques means that standard security improvements are
applicable to the SCADA systems, assuming they receive timely maintenance and
updates.
2.4
INTRODUCTION TO INSTRUMENTATION
2.5
CALIBRATION
less than 1/4 of the measurement uncertainty of the device being calibrated. When this
goal is met, the accumulated measurement uncertainty of all of the standards involved
is considered to be insignificant when the final measurement is also made with the 4:1
ratio. The test equipment being calibrated can be just as accurate as the working
standard. If the accuracy ratio is less than 4:1, then the calibration tolerance can be
reduced to compensate. At a 1:1 ratio, only an exact match between the standard and the device being calibrated yields a completely correct calibration; adjusting the calibration tolerance for the gauge would then be a better solution. If the calibration is
performed at 100 units, the 1% standard would actually be anywhere between 99 and
101 units. The acceptable values of calibrations where the test equipment is at the 4:1
ratio would be 96 to 104 units, inclusive. Changing the acceptable range to 97 to 103
units would remove the potential contribution of all of the standards and preserve a
3.3:1 ratio. Continuing, a further change to the acceptable range to 98 to 102 restores
more than a 4:1 final ratio. This is a simplified example. The mathematics of the
example can be challenged. It is important that whatever thinking guided this process
in an actual calibration be recorded and accessible. Informality contributes to
tolerance stacks and other difficult to diagnose post calibration problems. Also in the
example above, ideally the calibration value of 100 units would be the best point in
the gage's range to perform a single-point calibration. It may be the manufacturer's
recommendation or it may be the way similar devices are already being calibrated.
Multiple point calibrations are also used. Depending on the device, a zero unit state,
the absence of the phenomenon being measured, may also be a calibration point. Or
zero may be resettable by the user; there are several variations possible. Again, the
points to use during calibration should be recorded. There may be specific connection
techniques between the standard and the device being calibrated that may influence
the calibration. For example, in electronic calibrations involving analog phenomena,
the impedance of the cable connections can directly influence the result. All of the
information above is collected in a calibration procedure, which is a specific test
method.
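The guard-banding arithmetic in the example above can be sketched in Python (a simplified model, as the text itself cautions; subtracting the standard's tolerance from the acceptance band is one common, conservative choice):

```python
def acceptance_limits(nominal, device_tol, standard_tol):
    """Guard-banded acceptance limits: shrink the device tolerance band by
    the standard's tolerance, so the standard's own uncertainty cannot push
    an out-of-tolerance device into acceptance."""
    half_band = device_tol - standard_tol
    return nominal - half_band, nominal + half_band

# Numbers from the example above: calibration at 100 units, device tolerance
# +/-4 units (the 4:1 case), standard tolerance +/-1 unit (the 1% standard).
limits = acceptance_limits(100.0, 4.0, 1.0)   # (97.0, 103.0)
```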
These procedures capture all of the steps needed to perform a successful calibration.
The manufacturer may provide one or the organization may prepare one that also
captures all of the organization's other requirements. There are clearinghouses for
calibration procedures such as the Government-Industry Data Exchange Program
(GIDEP) in the United States. This exact process is repeated for each of the standards
used until transfer standards, certified reference materials, and/or natural physical constants, the measurement standards with the least uncertainty in the laboratory, are reached. [1] This establishes the traceability of the calibration. See Metrology for other
factors that are considered during calibration process development. After all of this,
individual instruments of the specific type discussed above can finally be calibrated.
The process generally begins with a basic damage check. Some organizations such as
nuclear power plants collect "as-found" calibration data before any routine
maintenance is performed. After routine maintenance and deficiencies detected during
calibration are addressed, an "as-left" calibration is performed. More commonly, a
calibration technician is entrusted with the entire process and signs the calibration
certificate, which documents the completion of a successful calibration.
Calibration may be required in circumstances such as:
a) A new instrument.
b) After an instrument has been repaired or modified.
c) When a specified time period has elapsed.
d) When a specified usage (operating hours) has elapsed.
e) Before and/or after a critical measurement.
f) After an event, for example after an instrument has had a shock, vibration, or has been exposed to an adverse condition which potentially may have put it out of calibration.
This is the perception of the instrument's end-user. However, very few
instruments can be adjusted to exactly match the standards they are compared
to. For the vast majority of calibrations, the calibration process is actually the
2.6
RESISTANCE TEMPERATURE DETECTOR (RTD)
2.6.1 CALIBRATION
To characterize the R vs. T relationship of any RTD over a temperature range that represents the planned range of use, calibration must be performed at temperatures other than 0 °C and 100 °C. Two common calibration methods are the fixed-point method and the comparison method. Fixed-point calibration, used for the highest-accuracy calibrations, uses the triple point, freezing point or melting point of pure substances such as water, zinc, tin, and argon to generate a known and repeatable temperature. Fixed-point calibrations provide extremely accurate calibrations (within ±0.001 °C). A common fixed-point calibration method for industrial-grade probes is the ice bath. The equipment is inexpensive, easy to use, and can accommodate several sensors at once. The ice point is designated as a secondary standard because its accuracy is ±0.005 °C (±0.009 °F), compared to ±0.001 °C (±0.0018 °F) for primary fixed points. In comparison calibrations, commonly used with industrial RTDs, the thermometers being calibrated are compared to calibrated thermometers by means of a bath whose temperature is uniformly stable. Unlike fixed-point calibrations, comparisons can be made at any temperature between -100 °C and 500 °C (-148 °F to 932 °F). This method can be more cost-effective, since several sensors can be calibrated simultaneously with automated equipment. These electrically heated and well-stirred baths use silicone oils and molten salts as the
thermal measurement error. The sensing wire is connected to a larger wire, usually
referred to as the element lead or wire. This wire is selected to be compatible with the
sensing wire so that the combination does not generate an emf that would distort the
thermal measurement. These elements work with temperatures up to 660 °C. Coiled elements have largely replaced wire-wound elements in industry. This design has a
wire coil which can expand freely over temperature, held in place by some
mechanical support which lets the coil keep its shape. This strain free design allows
the sensing wire to expand and contract free of influence from other materials. The
basis of the sensing element is a small coil of platinum sensing wire. This coil
resembles a filament in an incandescent light bulb. The housing or mandrel is a hard
fired ceramic oxide tube with equally spaced bores that run transverse to the axes. The
coil is inserted in the bores of the mandrel and then packed with a very finely ground
ceramic powder. This permits the sensing wire to move while still remaining in good
thermal contact with the process. These elements work with temperatures up to 850 °C. The current international standard which specifies the tolerance and the temperature-to-electrical-resistance relationship for platinum resistance thermometers is IEC 60751:2008; ASTM E1137 is also used in the United States. By far the most common devices used in industry have a nominal resistance of 100 Ω at 0 °C, and are called Pt100 sensors ('Pt' is the symbol for platinum, 100 for the resistance in ohms at 0 °C). The sensitivity of a standard 100 Ω sensor is a nominal 0.385 Ω/°C. RTDs with sensitivities of 0.375 and 0.392 Ω/°C, as well as a variety of others, are also available.
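The R vs. T relationship of a Pt100 sensor for temperatures at or above 0 °C can be sketched with the IEC 60751 Callendar-Van Dusen coefficients (a sketch only; below 0 °C the standard adds a cubic term):

```python
def pt100_resistance(t_c, r0=100.0):
    """Pt100 resistance (ohms) at t_c degC, valid for t_c >= 0,
    using the IEC 60751 Callendar-Van Dusen coefficients."""
    A = 3.9083e-3    # degC^-1
    B = -5.775e-7    # degC^-2
    return r0 * (1 + A * t_c + B * t_c ** 2)

r_100 = pt100_resistance(100.0)                       # about 138.51 ohm
mean_sens = (r_100 - pt100_resistance(0.0)) / 100.0   # about 0.385 ohm/degC,
# matching the nominal sensitivity quoted above.
```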
2.7
THERMOCOUPLE
Fig 2.6 Thermocouple
when choosing a type of thermocouple. Where the measurement point is far from the
measuring instrument, the intermediate connection can be made by extension wires
which are less costly than the materials used to make the sensor. Thermocouples are
usually standardized against a reference temperature of 0 degrees Celsius; practical
instruments use electronic methods of cold-junction compensation to adjust for
varying temperature at the instrument terminals. Electronic instruments can also
compensate for the varying characteristics of the thermocouple, and so improve the
precision and accuracy of measurements. Thermocouples are widely used in science and industry; applications include temperature measurement for gas turbine exhaust, diesel engines, and other industrial processes.
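Cold-junction compensation can be illustrated with a deliberately simplified linear model (real instruments use the standard NIST polynomial tables for each thermocouple type; the single ~41 uV/degC slope for type K near room temperature is an assumption made only for this sketch):

```python
# Approximate type-K thermocouple sensitivity near room temperature.
SEEBECK_K = 41e-6   # V/degC (linear assumption for illustration only)

def measured_temperature(emf_volts, cold_junction_c):
    """Add back the emf 'missing' because the reference junction is not at
    0 degC, then convert the total emf to a temperature."""
    total_emf = emf_volts + SEEBECK_K * cold_junction_c
    return total_emf / SEEBECK_K

# Instrument terminals at 25 degC, measured emf 2.05 mV:
t = measured_temperature(2.05e-3, 25.0)   # 2.05e-3/41e-6 + 25 = 75 degC
```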
CHAPTER-3
RESULTS
During calibration the following results were obtained with a K-type thermocouple; they are tabulated in Table 3.1.
TABLE 3.1
29
39
59
54
59
The calibration of the resistance temperature detector was done and the following results were obtained; they are tabulated in Table 3.2.
TABLE 3.2
Temperature (°C)    Resistance (Ω)
25                  100
35                  103.9
45                  107.79
50                  109.73
55                  111.67
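As a quick cross-check (not part of the original report), the mean sensitivity implied by the Table 3.2 data can be computed with a least-squares slope; it comes out close to the nominal 0.385 Ω/°C quoted in Section 2.6:

```python
# Least-squares slope of the Table 3.2 RTD data (resistance vs. temperature).
temps = [25, 35, 45, 50, 55]
ohms  = [100, 103.9, 107.79, 109.73, 111.67]

n = len(temps)
mean_t = sum(temps) / n
mean_r = sum(ohms) / n
slope = (sum((t - mean_t) * (r - mean_r) for t, r in zip(temps, ohms))
         / sum((t - mean_t) ** 2 for t in temps))   # ohm/degC
# slope is about 0.389 ohm/degC, close to the 0.385 nominal.
```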
CHAPTER-4
CONCLUSION
During the training period in the instrumentation section, calibration was done as per industrial calibration standards and was properly checked and verified. PLC and SCADA automation techniques and programming were studied, as per the industrial requirements. Rockwell automation techniques and new developments in SCADA
REFERENCES
[1] A.K. Sawhney, Electronics Measurements and Instruments (second edition), vol. 1213
[2] David Bailey and Edwin Wright, SCADA Basics, second edition (2003)
[3] W. Bolton, Programmable Logic Controllers, fourth edition (2006), vol. 229
Links
1) https://www.rockwellautomation.com/rockwellautomation/industries/automotive
2) http://literature.rockwellautomation.com/idc/groups/literature/documents/ar/journk-ar010_-en-p.pdf
3) http://www.idconline.com/technical_references/pdfs/electrical_engineering/Control_of_Boiler_Operation_using_PLC%20-%20SCADA.pdf