Case Study

Complex Applicator
Challenge
- Complex actuator with multiple springs, several frictional interfaces, and latches defining distinct regions of travel
- Identify a small number of key success criteria to evaluate robustness
- Determine the robustness of a single-use applicator for a wearable sensor before committing to expensive (time and $) tooling, large-scale builds, and testing
We use analytical models to evaluate and compare different design concepts on a level playing field. We then continue to develop and exploit the analytical model by creating a parallel empirical model environment, and use that environment to refine the inputs to the analytical model and converge the two. In this case the analytical model is built from first principles (F = m*a), and we use first-principles models for elements such as beams and snaps. Key characteristics of the actuator: a Scotch yoke plus four prismatic elements, multiple sliding interfaces, and multiple "handoffs".

We use empirically determined values where necessary (for terms such as friction), and we confirm and reconcile friction and interface forces, as well as spring forces and moments, against empirical measurements. The analytical model can then be used for robustness studies (high-n simulations take hours to days, compared to many months and $$$$ for build, test, and discovery) and helps us understand design vulnerabilities. It can also help us debug anomalous observations: we impose hypothetical inputs on the model to reproduce the observed outputs, which gives insight into which parts or interfaces to investigate. We can also over-stress both the analytical and empirical models to find the conditions that make the design fail, and we use those results to identify and address design weaknesses.
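A first-principles model of this kind can be illustrated with a minimal sketch: a spring-driven slider with Coulomb friction at one sliding interface, where the friction coefficient is the kind of term that would be empirically determined and reconciled against test data. All function names and parameter values below are hypothetical and illustrative, not taken from the actual device.

```python
# Hypothetical sketch: net driving force on a spring-driven slider with
# Coulomb friction at a single sliding interface. Parameter values are
# illustrative placeholders, not the actual device's values.
def net_force(x, k=500.0, preload=2.0, normal=3.0, mu=0.25):
    """Spring force minus friction at travel position x (m).

    k       spring rate (N/m)
    preload spring preload (N)
    normal  interface normal force (N)
    mu      friction coefficient (empirically determined in practice)
    """
    spring = preload + k * x   # Hooke's law plus preload
    friction = mu * normal     # Coulomb friction opposing motion
    return spring - friction

def acceleration(x, mass=0.01):
    """Acceleration of the moving mass from F = m*a (mass in kg)."""
    return net_force(x) / mass
```

In practice the real model chains many such elements (springs, beams, snaps, multiple interfaces and handoffs), with the friction and normal-force terms tuned until the analytical and empirical models agree.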

Approach
- Parallel development of high fidelity analytical and empirical models
- Appropriate force and position sensing built into empirical model
- Refine both systems until there is reasonable agreement and discrepancies are understood
Results
- Verified model enables Monte Carlo simulation
- Observed behavior – in both analytical and empirical models – yields useful insight for the design of the device
- Build and “test” thousands of virtual devices in a matter of hours
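The Monte Carlo step above can be sketched as follows: sample toleranced inputs (spring rate, preload, friction) for each virtual build, apply a simple success criterion, and estimate yield over thousands of samples. The distributions, tolerances, and success criterion here are hypothetical stand-ins for the real device parameters.

```python
import random

# Hypothetical Monte Carlo robustness sketch: each virtual build draws
# toleranced inputs, and "success" means the net driving force stays
# positive over the full stroke. All values are illustrative.
def virtual_build(rng):
    k = rng.gauss(500.0, 25.0)      # spring rate, N/m (nominal +/- tolerance)
    preload = rng.gauss(2.0, 0.1)   # spring preload, N
    mu = rng.gauss(0.25, 0.03)      # friction coefficient (empirical)
    normal = rng.gauss(3.0, 0.2)    # interface normal force, N
    # Success criterion: positive net force at every point of a 10 mm stroke
    return all(preload + k * x - mu * normal > 0
               for x in (i * 0.001 for i in range(11)))

def yield_estimate(n=10_000, seed=42):
    """Fraction of n virtual builds that meet the success criterion."""
    rng = random.Random(seed)
    passes = sum(virtual_build(rng) for _ in range(n))
    return passes / n
```

Running tens of thousands of virtual builds like this takes seconds to hours, which is what makes robustness studies practical before committing to tooling.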

Key Takeaways
- Empirical and simulation data in good agreement
- Enabled Monte Carlo simulations that provided insight into robustness at scale, without the impracticality of building and testing thousands of units
- KDPD allowed for the right resources to be focused on the right problems