Trend Analysis to Acquire Knowledge at Increased Volumes
by Perry Parendo, Perry’s Solutions, Inc.

Strategies
October-November 2012



When new products are created, the development homework is performed on only a small portion of the eventual production. Sometimes it amounts to just weeks, or even days, of equivalent full-volume production. If done well, the homework will show performance over a wide range of environments and provide an understanding of the key inputs. If not done well, product settings are guessed at and the business decision is made to “clean it up” in production. Either way, an efficient and effective method for learning at higher volumes is needed.

What is the homework?
A key element in creating new products is requirements; yet, it is commonly accepted that 70 percent of projects fail due to poor requirements. Translating the customer’s needs and wants into a workable product definition is a vital activity, as is risk management, which is needed to apply priority to the homework activities. Often, false optimism will lead to overlooking risks, jeopardizing the project outcome. Design of Experiments (DOE) can be a critical tool to create new products and reduce the risks of not meeting requirements; but even with the best homework, there still is plenty of learning left to do.

How does DOE work?
DOE is a method that can simultaneously change input variables and extract their individual contributions to an output variable. While it is a mathematical tool, the keys to success lie with the non-math activities. The first step is to determine what needs to be learned. This tends to be a balance between performance objectives and business needs (cost, as one example). Second, determine the measurements of the results that will indicate progress toward those needs. Multiple measurements are not uncommon in a real-life situation, though many textbooks will only show one. Next, determine the potential input variables that may influence those outputs, balanced against the available budget and schedule. Finally, these items are translated into a predefined matrix that allows the desired relationships to be extracted. After the data is collected, a statistical analysis is performed to provide a mathematical model and to determine what has been learned. Based on the findings, either a recommendation for change is made or further testing is planned to obtain deeper learning. Any step may return to an earlier step if a contradiction needs to be resolved.
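To make the mechanics concrete, here is a minimal sketch in Python of a two-level, three-factor full-factorial design with the main effects estimated by least squares. The factor names, coded settings and response values are hypothetical illustrations, not data from any project discussed in this article.

    # Minimal DOE sketch: a 2-level, 3-factor full-factorial design with
    # main effects extracted by ordinary least squares. All names and
    # numbers below are made up for illustration.
    import itertools
    import numpy as np

    # Coded design matrix: every combination of low (-1) and high (+1)
    # settings for three hypothetical inputs.
    factors = ["melt_temp", "hold_pressure", "cool_time"]
    design = np.array(list(itertools.product([-1, 1], repeat=len(factors))))

    # One measured output per run, e.g., flash length in mm (invented values).
    response = np.array([0.42, 0.38, 0.55, 0.31, 0.47, 0.29, 0.60, 0.25])

    # Fit y = b0 + b1*x1 + b2*x2 + b3*x3 to estimate each input's contribution.
    X = np.column_stack([np.ones(len(design)), design])
    coeffs, *_ = np.linalg.lstsq(X, response, rcond=None)

    print(f"intercept: {coeffs[0]:.3f}")
    for name, b in zip(factors, coeffs[1:]):
        # In coded units, moving a factor from low to high changes y by 2*b.
        print(f"{name}: coefficient={b:.3f}, low-to-high effect={2*b:.3f}")

The predefined matrix is what allows the individual contributions to be separated cleanly, even though all three inputs change from run to run.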

Trend analysis with good homework
To better understand the challenge, let’s look at an example. A new tool was being developed for a difficult gasket design. The goal was to make the part “flash free” to eliminate secondary processing. This project required multiple iterations as the initial tool could not even fill the part. Using a Design of Experiments approach, desirable settings were obtained in about four weeks, which included a change to the tool. By showing the ability to produce this design, the supplier was able to obtain other contracts from the customer, making a major impact on the organization.

The reduced set of key inputs determined from the DOE then could be monitored long-term and trends identified. These input variables could be compared to the equipment capability to understand the ability to hit the requirement window. As special causes of output variability were discovered and eliminated, the key parameters that needed to be confirmed in downstream operations were reduced even further. This limited the effort and yet maximized the opportunity for success – the ability to hit the target value with minimum variation.
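One common way to quantify the ability to hit the requirement window is a process capability index. The short sketch below assumes a hypothetical key input with invented specification limits and sample measurements; it is offered only to show the arithmetic.

    # Capability of a hypothetical key input against its requirement window.
    import numpy as np

    measurements = np.array([101.2, 99.8, 100.5, 100.1, 99.6, 100.9, 100.3, 99.9])
    lsl, usl = 97.0, 103.0            # hypothetical lower/upper spec limits

    mean = measurements.mean()
    sigma = measurements.std(ddof=1)  # sample standard deviation

    cp = (usl - lsl) / (6 * sigma)                   # potential capability
    cpk = min(usl - mean, mean - lsl) / (3 * sigma)  # capability incl. centering

    print(f"Cp = {cp:.2f}, Cpk = {cpk:.2f}")  # higher is better; ~1.33 is a common target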

Trend analysis with weak homework
If the initial characterization work was brief, the opportunities for learning will show up in greater numbers. Because of this, it is important to start with higher-level measurements. If too much information is gathered, it becomes overwhelming: even though the findings are low-hanging fruit, there is not enough time to deal with all of them. Learning points go unrecognized and are lost as noise, which often decreases the quality of the information collected as well.


Chart 1.

In implementations I have been involved with, we started with very few trend charts – as few as two or three, based on the defect and rework history. This created plenty of support work and plenty of improvement. Eventually, a point was reached where other “pain points” were visible. A choice had to be made to either add additional charts or to remove the old ones and move on to new ones.


Chart 2.

As an example, a foundry had developed a long history of doing things a certain way. Rework and repair were expected. It took two months of periodically working together to build up the trust necessary for the foundry to give charting a try. Having the rework operator create a manual SPC chart allowed us to quickly address non-conforming product. Within two more months, that repair operator was looking for other work within the company. This was charting only two part numbers within a department that had over 100 employees. Smart focus created massive leverage and important learning.

Standard and common trend charts
In quality circles, these trend charts are referred to as Statistical Process Control (SPC) charts. (See Chart 1.) The tools have decades of experience behind them to ensure the intent can be achieved. Some users take a very simplistic view and only look for points that exceed the defined upper control limit (UCL) and lower control limit (LCL). This only tells us “we made some bad stuff.” (See Chart 2 above.)
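As a minimal illustration of that basic check, the sketch below computes the center line and control limits for an Individuals chart from the average moving range and flags any points beyond the UCL or LCL. The data values are invented.

    # Individuals (I) chart limits from the average moving range, plus a
    # simple check for points beyond the limits. Data is illustrative only.
    import numpy as np

    data = np.array([4.9, 5.1, 5.0, 5.3, 4.8, 5.2, 6.1, 5.0, 4.7, 5.1])

    center = data.mean()
    mr_bar = np.abs(np.diff(data)).mean()   # average moving range (size 2)

    # 2.66 = 3 / d2, with d2 = 1.128 for a moving range of two observations.
    ucl = center + 2.66 * mr_bar
    lcl = center - 2.66 * mr_bar

    out_of_limits = np.where((data > ucl) | (data < lcl))[0]
    print(f"CL={center:.2f}  UCL={ucl:.2f}  LCL={lcl:.2f}")
    print("points beyond the limits:", out_of_limits.tolist())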

However, the tools have the ability to predict potential future bad stuff. These are called “control signals” or “tests.” There are countless potential tests, and similar tests can have slightly different criteria. The point is to strike a balance between catching real events as they take place and avoiding false signals that lead the team on a wild goose chase. The better you understand the history of your process or product, the better you will know the types of trends that are possible. Selecting a few appropriate signals will make the implementation easier and yet still be useful for learning. (See Chart 3.)
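One such test, shown as a sketch below, is a simple run rule: several consecutive points on the same side of the center line suggest the process has shifted even though no single point is out of limits. The run length of eight and the data are illustrative choices, not a universal rule.

    # A simple run-rule signal: flag any point that completes a run of
    # `run_length` consecutive values on one side of the center line.
    import numpy as np

    def run_signal(data, center, run_length=8):
        signals = []
        side = np.sign(data - center)   # +1 above, -1 below, 0 on the line
        current, count = 0, 0
        for i, s in enumerate(side):
            if s != 0 and s == current:
                count += 1
            else:
                current, count = s, (1 if s != 0 else 0)
            if count >= run_length:
                signals.append(i)
        return signals

    data = np.array([5.0, 5.1, 5.2, 5.1, 5.3, 5.2, 5.1, 5.2, 5.3, 4.9])
    print(run_signal(data, center=5.0))   # flags the point completing the run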

The core trend charts are x-bar and R charts, Individuals charts and p-charts. A detailed description is beyond the scope of this article, but the creation methods, interpretation techniques and limit calculations are available in plenty of resources.
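For readers who want a worked example, the sketch below shows the x-bar and R chart limit calculations using the standard table constants for a subgroup size of five (A2 = 0.577, D3 = 0, D4 = 2.114). The subgroup data is made up.

    # x-bar and R chart limits for subgroups of five measurements.
    import numpy as np

    subgroups = np.array([
        [5.1, 5.0, 4.9, 5.2, 5.0],
        [5.0, 5.3, 5.1, 4.8, 5.0],
        [4.9, 5.0, 5.2, 5.1, 5.0],
        [5.2, 5.1, 5.0, 5.0, 4.9],
    ])
    A2, D3, D4 = 0.577, 0.0, 2.114     # constants for subgroup size n = 5

    xbar = subgroups.mean(axis=1)                       # subgroup averages
    r = subgroups.max(axis=1) - subgroups.min(axis=1)   # subgroup ranges
    xbarbar, rbar = xbar.mean(), r.mean()

    ucl_x, lcl_x = xbarbar + A2 * rbar, xbarbar - A2 * rbar
    print(f"x-bar chart: CL={xbarbar:.3f}  UCL={ucl_x:.3f}  LCL={lcl_x:.3f}")
    print(f"R chart:     CL={rbar:.3f}  UCL={D4 * rbar:.3f}  LCL={D3 * rbar:.3f}")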

Key elements to trend chart implementation
Chart implementation should be guided by five key elements.

  1. Input measurements onto the chart in real time.
  2. React in real time to any signals generated.
  3. Do not generate more signals than you have resources to react to.
  4. Recalculate limits only after changes have been implemented.
  5. Retire charts that are not providing value.

First, the measurement system should feed directly into the chart. If an automated approach is desired, this means the measurement goes straight into the software. When data is collected manually and then entered into the software “at a later time,” responsiveness is hampered and effectiveness is severely reduced.


Chart 3.

Second, the signals need to be read immediately. A visible reaction to any signal needs to occur. Examples of these responses depend on the signal generated and could include the following:

  • line stop,
  • continue production, but increase inspections or
  • email/text engineering to notify them of a low-priority signal.
The automated measurement needs to generate the correct response, rather than waiting for someone to notice. The danger in this quick reaction is that, many times, the “bad” measurements are themselves incorrect. So, a possible first step is to re-measure. When manual charts are used, the operator needs to be aware of the appropriate reaction based on the signal triggered. The operator then will naturally re-measure before making calls or shutting down the machine. They also will likely ask a few questions before simply hitting the off switch.
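In an automated setting, generating the correct response amounts to a simple dispatch from the type of signal to one of the reactions listed above. The sketch below is one hypothetical way to wire that up; the signal names and reaction strings are placeholders, not a prescribed scheme.

    # Hypothetical dispatch from SPC signal type to a reaction.
    from enum import Enum

    class Signal(Enum):
        BEYOND_LIMITS = "point beyond UCL/LCL"
        RUN_RULE = "run of points on one side of the center line"
        NONE = "no signal"

    def react(signal: Signal) -> str:
        # Re-measuring first is often a sensible step before stopping anything.
        if signal is Signal.BEYOND_LIMITS:
            return "re-measure; if confirmed, stop the line"
        if signal is Signal.RUN_RULE:
            return "continue production, increase inspections, notify engineering"
        return "no action"

    print(react(Signal.BEYOND_LIMITS))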

Third, only generate signals for which you have resources to respond. If a signal is generated and engineering is too busy to respond (because the importance is not high enough), it quickly diminishes the value of all measurements. Suddenly, the quality of measurements and reporting goes down, and the motivation to take notice of special situations goes away.

Fourth, any trend chart has a calculated signal level, but an important consideration is when to recalculate. Some automated systems are set up to constantly recalculate, which is not appropriate. The goal is to find shifts from the period of time when the system was closely studied and the performance understood. If significant changes are made to the process, then the calculations can be redone, but this also implies that homework has been done to create a solid understanding about the change. In a manual system, it is not quick or easy to recalculate, so recalculation tends to be done no more than needed.

Finally, a chart needs to be considered for retirement. If it is truly critical, then maybe it can be retained. However, much data is gathered and never used. The costs of measurement, storage and review can be a huge hidden cost within an organization. I have observed situations where 20 minutes per shift was being wasted on useless data collection. Just because it can be collected does not mean it should be. When automated, the cost is hidden even more. A manual chart begs the question “should we still be doing this?”

Benefits of a manual chart
My personal preference is to start with a manual chart. I believe that anything that is automated should be done manually first. It also encourages a simple approach. As with the previous case studies, we can create a minimum of work, yet have a high impact. A few benefits I have found when using a manual chart are as follows:

  • Data integrity. If an operator is putting a “dot” on a chart and it is going to be a “bad” one, they will likely double check it first. In an automated system, the numbers just flow through. In my experience of chasing down problems, a large percentage of “issues” were mis-measurements due to measuring a burr, misreading the gauge or improper methods.
  • Ownership. When the dot is placed on the chart, the operator will be compelled to understand why. The search for a root cause will begin right away. If an alarm sounds, it can be brushed aside or considered an “engineering” problem. The best chance of finding a source happens right at the process – any delay risks the ability to improve.
  • Prioritization. A manual method will force the question: “Why are we doing this?” Only the most important products and most important defects will be worked. This gives importance to the data, meaning that anything collected is acted on.
Conclusion
The best situation is to begin any product launch with a strong base of homework. While we know we will grow and learn, the homework creates a foundation. If the homework was good, a focused SPC effort can be adequate. If the homework was weak, a high-level, simple chart will uncover plenty of low hanging fruit.

Regardless of the level of development work done, a method for feedback and learning is essential to mature a product. Following the advice in this article will help put you on a path to long-term improvement, while also avoiding a false start.

Perry’s Solutions is a consulting company offering new product design, program management and training services, specializing in using Design of Experiments software to improve products and solve problems for medical device companies and other manufacturers. Perry Parendo, president, can be reached via phone at 651.230.3861 or through his website, www.perryssolutions.com.