In an earlier post, “Ready, Fire, Aim,” I described a scenario that’s probably all too common in product and process development.
Our team was charged with improving and validating the manufacturing processes associated with a particular product. We were on this path and had identified a couple of areas for improvement. At the same time, the product had been experiencing several performance issues. The goal was for our team to determine the root causes and resolve the problems.
A second group had their minds made up. They knew that if one piece of equipment were changed and the associated process properly tweaked, the performance issues would go away. During this past week, that group led the charge and directed the changes to be made. Our original team was tasked with the testing and verification activities to prove the tweaks did in fact fix the problems. One minor issue: the testing proved the issue remains. The process thought to be the root cause is in fact NOT. The outside group that came in to save the day has left our project team to pick up the pieces.
When this device was developed, the testing showed there was no issue (I question that testing and its results). But management stated that their data was accepted and demonstrated the issue was not present when the product was originally released. So now our team gets the dubious honor of testing with a robust method that will in fact provide evidence that the issues are still present and were not resolved by the process “improvements.” We are fairly certain that the results we are about to gather will show the product is out of specification.
Did we hit the target?