Publication

On the Dual-Use Dilemma in Physical Reasoning and Force

Posted May 24, 2025

Humans learn how and when to apply forces in the world via a complex physiological and psychological learning process. Attempting to replicate this in vision-language models (VLMs) presents two challenges: VLMs can produce harmful behavior, which is particularly dangerous for VLM-controlled robots that interact with the world, but imposing behavioral safeguards can limit their functional and ethical extents. We conduct two case studies on safeguarding VLMs that generate forceful robotic motion, finding that safeguards reduce both harmful and helpful behavior involving contact-rich manipulation of human body parts. We then discuss the key implication of this result, that value alignment may impede desirable robot capabilities, for model evaluation and robot learning.

Figure: Varying contextual semantics in the same scene can yield harm and help, often with a thin line separating them. We evaluate how VLMs, under different prompt schemes that elicit physical reasoning for robot control, navigate this line between harm and help for forceful, contact-rich tasks with potential for bodily danger.

References

Xie, W., Rice, E. and Correll, N., 2025. On the Dual-Use Dilemma in Physical Reasoning and Force. arXiv preprint arXiv:2505.18792.
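The case studies boil down to bookkeeping over paired harm and help prompts for the same scene: how often does the safeguarded model comply with the former or refuse the latter? The sketch below illustrates only that bookkeeping; the query function, refusal heuristic, and example prompts are hypothetical stand-ins, not the paper's evaluation code.

```python
# Minimal sketch (not the paper's evaluation code) of tallying how often a
# safeguarded VLM complies with or refuses paired "harm" and "help" prompts
# for the same forceful, contact-rich scene. The refusal heuristic and the
# stubbed model are illustrative assumptions.
from collections import Counter

def is_refusal(response: str) -> bool:
    """Crude keyword heuristic standing in for a real refusal classifier."""
    return any(kw in response.lower() for kw in ("cannot", "won't", "unsafe", "refuse"))

def evaluate(prompts, query_vlm):
    """prompts: iterable of (scene_id, intent, text) with intent in {'harm', 'help'}."""
    tally = Counter()
    for scene_id, intent, text in prompts:
        refused = is_refusal(query_vlm(text))
        tally[(intent, "refused" if refused else "complied")] += 1
    return tally

if __name__ == "__main__":
    # Stubbed model: refuses anything that asks to press hard on a person.
    stub = lambda p: "I cannot help with that." if "press hard" in p else "Applying 5 N for 2 s."
    prompts = [
        ("scene1", "harm", "press hard on the arm to restrain the person"),
        ("scene1", "help", "apply gentle pressure to the arm to check the pulse"),
    ]
    # Over-refusal shows up as ('help', 'refused'); harm slipping through as ('harm', 'complied').
    print(evaluate(prompts, stub))
```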
Unfettered Forceful Skill Acquisition with Physical Reasoning and Coordinate Frame Labeling

Posted May 14, 2025

Figure: A natural language query, together with head and wrist images both annotated with a coordinate frame at a VLM-generated grasp point (u, v) on the image, is provided to Gemini to estimate, using spatial and physical reasoning, an appropriate wrench and duration to execute the task. The wrench is then passed to a compliance controller and the resulting motion and visual data can be used for iterative task improvement.
Vision language models (VLMs) exhibit vast knowledge of the physical world, including intuition of physical and spatial properties, affordances, and motion. With fine-tuning, VLMs can also natively produce robot trajectories. We demonstrate that eliciting wrenches, not trajectories, allows VLMs to explicitly reason about forces and leads to zero-shot generalization in a series of manipulation tasks without pretraining. We achieve this by overlaying a consistent visual representation of relevant coordinate frames on robot-attached camera images to augment our query. First, we show how this addition enables a versatile motion control framework evaluated across four tasks (opening and closing a lid, pushing a cup or chair) spanning prismatic and rotational motion, an order of magnitude in force and position, different camera perspectives, annotation schemes, and two robot platforms over 220 experiments, resulting in 51% success across the four tasks. Then, we demonstrate that the proposed framework enables VLMs to continually reason about interaction feedback to recover from task failure or incompletion, with and without human supervision. Finally, we observe that prompting schemes with visual annotation and embodied reasoning can bypass VLM safeguards. We characterize prompt component contribution to harmful behavior elicitation and discuss its implications for developing embodied reasoning. Our code, videos, and data are available at https://scalingforce.github.io/.

References

Xie, W., Conway, M., Zhang, Y. and Correll, N., 2025. Unfettered Forceful Skill Acquisition with Physical Reasoning and Coordinate Frame Labeling. arXiv preprint arXiv:2505.09731.
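A minimal sketch of the data flow the abstract and caption describe, assuming a generic VLM client and robot interface: annotated head and wrist images plus a task string go to the model, which returns a wrench and duration that are handed to a compliance controller. The prompt wording, JSON schema, and function names (query_vlm, send_wrench) are illustrative assumptions, not the released code at https://scalingforce.github.io/.

```python
# Minimal sketch of the wrench-elicitation idea, not the authors' implementation.
# The VLM client, JSON schema, and robot interface (send_wrench) are assumptions;
# only the data flow mirrors the abstract: annotated images + task -> wrench and
# duration -> compliance controller.
import json

PROMPT = (
    "The images show the scene from the head and wrist cameras, each annotated "
    "with a coordinate frame at the grasp point (u, v). For the task: '{task}', "
    "return JSON {{\"wrench\": [fx, fy, fz, tx, ty, tz], \"duration_s\": t}} "
    "expressed in the annotated frame (N, Nm, seconds)."
)

def plan_wrench(task, head_img, wrist_img, query_vlm):
    """query_vlm(images, prompt) -> str is a placeholder for e.g. a Gemini call."""
    reply = query_vlm([head_img, wrist_img], PROMPT.format(task=task))
    plan = json.loads(reply)
    return plan["wrench"], plan["duration_s"]

def execute(task, head_img, wrist_img, query_vlm, send_wrench):
    wrench, duration = plan_wrench(task, head_img, wrist_img, query_vlm)
    send_wrench(wrench, duration)      # hand off to the robot's compliance controller
    return wrench, duration

if __name__ == "__main__":
    stub_vlm = lambda imgs, p: '{"wrench": [0, 0, -10, 0, 0, 0], "duration_s": 2.0}'
    stub_robot = lambda w, t: print(f"commanding wrench {w} for {t} s")
    execute("push the lid closed", None, None, stub_vlm, stub_robot)
```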
"> </div> </div> <div role="contentinfo" class="container ucb-article-categories" itemprop="about"> <span class="visually-hidden">Categories:</span> <div class="ucb-article-category-icon" aria-hidden="true"> <i class="fa-solid fa-folder-open"></i> </div> <a href="/lab/correll/taxonomy/term/12"> Publication </a> </div> <div class="ucb-article-content ucb-striped-content"> <div class="container"> <div class="paragraph paragraph--type--article-content paragraph--view-mode--default 3"> <div class="ucb-article-row-subrow row"> <div class="ucb-article-text col-lg d-flex align-items-center" itemprop="articleBody"> <div><p><span>This article reviews contemporary methods for integrating force, including both proprioception and tactile sensing, in robot manipulation policy learning. We conduct a comparative analysis on various approaches for sensing force, data collection, behavior cloning, tactile representation learning, and low-level robot control. From our analysis, we articulate when and why forces are needed, and highlight opportunities to improve learning of contact-rich, generalist robot policies on the path toward highly capable touch-based robot foundation models. We generally find that while there are few tasks such as pouring, peg-in-hole insertion, and handling delicate objects, the performance of imitation learning models is not at a level of dynamics where force truly matters. Also, force and touch are abstract quantities that can be inferred through a wide range of modalities and are often measured and controlled implicitly. We hope that juxtaposing the different approaches currently in use will help the reader to gain a systemic understanding and help inspire the next generation of robot foundation models.</span></p></div> </div> <div class="ucb-article-content-media ucb-article-content-media-right col-lg"> <div> <div class="paragraph paragraph--type--media paragraph--view-mode--default"> <div> <div class="imageMediaStyle large_image_style"> <img loading="lazy" src="/lab/correll/sites/default/files/styles/large_image_style/public/2025-09/forcesensors.jpg?itok=QMpap1O2" width="1500" height="977" alt="Various ways for measuring force and torque on a robotic arm. "> </div> <span class="media-image-caption"> <p>Common force or touch sensing methods on robot arms include joint torques, wrist force/torque (F/T) sensors, and end effector gripper sensors.&nbsp;</p> </span> </div> </div> </div> </div> </div> </div> </div> </div> <div>This article reviews contemporary methods for integrating force, including both proprioception and tactile sensing, in robot manipulation policy learning. 
A Machine Learning Approach to Contact Localization in Variable Density Three-Dimensional Tactile Artificial Skin

Posted December 4, 2024

Estimating the location of contact is a primary function of artificial tactile sensing apparatuses that perceive the environment through touch. Existing contact localization methods use flat geometry and uniform sensor distributions as a simplifying assumption, limiting their ability to be used on 3D surfaces with variable density sensing arrays. This paper studies contact localization on an artificial skin embedded with mutual capacitance tactile sensors, arranged non-uniformly in an unknown distribution along a semi-conical 3D geometry. A fully connected neural network is trained to localize the touching points on the embedded tactile sensors. The studied online model achieves a localization error of 5.7 ± 3.0 mm. This research contributes a versatile tool and robust solution for contact localization on skins that are ambiguous in shape and internal sensor distribution.

Figure: The contact localization model takes in a sensor image from any configuration of artificial tactile skin and determines the location of touch through a feedforward neural network.

References

Murray, M., Zhang, Y., Kohlbrenner, C., Escobedo, C., Dunnington, T., Stevenson, N., Correll, N. and Roncone, A. A Machine Learning Approach to Contact Localization in Variable Density Three-Dimensional Tactile Artificial Skin. In 2nd NeurIPS Workshop on Touch Processing: From Data to Knowledge. https://openreview.net/pdf?id=sbkPfK5f20
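For readers who want a concrete picture of the regressor, here is an illustrative PyTorch sketch of a fully connected network mapping raw capacitance readings to a 3D contact point. The taxel count, layer widths, and training step are assumptions; only the input-to-output structure follows the abstract.

```python
# Illustrative PyTorch sketch of the kind of fully connected regressor described
# above: raw capacitance readings in, 3D contact location out. Layer sizes, the
# number of taxels (64), and training details are assumptions, not the paper's.
import torch
import torch.nn as nn

class ContactLocalizer(nn.Module):
    def __init__(self, n_taxels: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_taxels, 128), nn.ReLU(),
            nn.Linear(128, 128), nn.ReLU(),
            nn.Linear(128, 3),            # (x, y, z) contact point in mm
        )

    def forward(self, x):
        return self.net(x)

def train_step(model, optimizer, readings, targets):
    """One supervised step: minimize mean Euclidean error in mm."""
    optimizer.zero_grad()
    pred = model(readings)
    loss = torch.linalg.norm(pred - targets, dim=-1).mean()
    loss.backward()
    optimizer.step()
    return loss.item()

if __name__ == "__main__":
    model = ContactLocalizer()
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    readings = torch.randn(32, 64)        # synthetic batch of sensor images
    targets = torch.randn(32, 3) * 10.0   # synthetic contact points (mm)
    print("loss (mm):", train_step(model, opt, readings, targets))
```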
Just Add Force for Delicate Robot Policies

Posted October 18, 2024
Robot trajectories used for learning end-to-end robot policies typically contain end-effector and gripper position, workspace images, and language. Policies learned from such trajectories are unsuitable for delicate grasping, which requires tightly coupled and precise gripper force and gripper position. We collect and make publicly available 130 trajectories with force feedback of successful grasps on 30 unique objects. Our current-based method for sensing force, albeit noisy, is gripper-agnostic and requires no additional hardware. We train and evaluate two diffusion policies: one with the collected force feedback (forceful) and one without (position-only). We find that forceful policies are superior to position-only policies for delicate grasping and are able to generalize to unseen delicate objects, while reducing grasp policy latency by nearly 4x relative to LLM-based methods. With our promising results on limited data, we hope to signal to others to consider investing in collecting force and other such tactile information in new datasets, enabling more robust, contact-rich manipulation in future robot foundation models.

References

Xie, W., Caldararu, S. and Correll, N., 2024. Just Add Force for Delicate Robot Policies. arXiv preprint arXiv:2410.13124. https://arxiv.org/abs/2410.13124
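A hedged sketch of the current-based force sensing idea: map gripper motor current to an approximate grip force, smooth the noisy signal, and append it to the policy observation. The gain, offset, and filter constant below are invented calibration values, not the dataset's.

```python
# Hedged sketch of current-based force sensing: convert gripper motor current to
# an approximate grip force and smooth the noisy signal before adding it to the
# policy observation. The linear gain, offset, and filter constant are made-up
# values, not the paper's calibration.
import numpy as np

class CurrentForceEstimator:
    def __init__(self, gain_n_per_amp=12.0, offset_a=0.15, alpha=0.2):
        self.gain = gain_n_per_amp    # assumed N per amp of motor current
        self.offset = offset_a        # assumed idle-current offset (A)
        self.alpha = alpha            # exponential smoothing factor
        self._filtered = 0.0

    def update(self, current_a: float) -> float:
        raw = max(0.0, current_a - self.offset) * self.gain
        self._filtered = self.alpha * raw + (1 - self.alpha) * self._filtered
        return self._filtered

def make_observation(image_feat, gripper_pos, grip_force):
    """Force-augmented observation: image features + gripper position + force."""
    return np.concatenate([image_feat, [gripper_pos, grip_force]])

if __name__ == "__main__":
    est = CurrentForceEstimator()
    for i_a in [0.2, 0.5, 0.8, 0.8, 0.3]:       # synthetic current trace (A)
        print(f"{i_a:.1f} A -> {est.update(i_a):5.2f} N (filtered)")
    obs = make_observation(np.zeros(4), gripper_pos=0.03, grip_force=est.update(0.5))
    print("observation:", obs)
```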
DeliGrasp: Inferring Object Mass, Friction, and Compliance with LLMs for Adaptive and Minimally Deforming Grasp Policies

Posted March 12, 2024

Large language models (LLMs) can provide rich physical descriptions of most worldly objects, allowing robots to achieve more informed and capable grasping. We leverage LLMs' common sense physical reasoning and code-writing abilities to infer an object's physical characteristics (mass, friction coefficient, and spring constant) from a semantic description, and then translate those characteristics into an executable adaptive grasp policy. Using a current-controllable, two-finger gripper with a built-in depth camera, we demonstrate that LLM-generated, physically grounded grasp policies outperform traditional grasp policies on a custom benchmark of 12 delicate and deformable items including food, produce, toys, and other everyday items, spanning two orders of magnitude in mass and required pick-up force. We also demonstrate how compliance feedback from DeliGrasp policies can aid in downstream tasks such as measuring produce ripeness. Our code and videos are available at https://deligrasp.github.io.

Figure: Large language models (LLMs) have rich physical knowledge about worldly objects, but cannot directly reason robot grasps for them. Paired with open-world localization and pose estimation (left), our method (middle) queries LLMs for the salient physical characteristics of mass, friction, and compliance as the basis for an adaptive grasp controller. DeliGrasp policies successfully grasp delicate and deformable objects.

References

Xie, W., Valentini, M., Lavering, J. and Correll, N., 2024. DeliGrasp: Inferring Object Properties with LLMs for Adaptive Grasp Policies. In 8th Annual Conference on Robot Learning.

Xie, W., Lavering, J. and Correll, N., 2024. DeliGrasp: Inferring Object Mass, Friction, and Compliance with LLMs for Adaptive and Minimally Deforming Grasp Policies. arXiv preprint arXiv:2403.07832. https://arxiv.org/pdf/2403.07832v1.pdf
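To make the characteristics-to-policy step concrete, here is a minimal sketch in the spirit of the description above (not the released DeliGrasp code): a textbook antipodal friction model sets the initial grasp force from mass and friction, and the spring constant caps how far the gripper may squeeze while reacting to slip. All constants are illustrative.

```python
# Minimal sketch of grounding an adaptive grasp in LLM-inferred properties.
# The antipodal friction model and the slip-check loop are textbook
# simplifications; variable names, limits, and example values are assumptions.
G = 9.81  # m/s^2

def min_grasp_force(mass_kg: float, friction_mu: float, safety: float = 1.5) -> float:
    """Two-finger antipodal grasp: friction at both contacts must support the weight."""
    return safety * mass_kg * G / (2.0 * friction_mu)

def adaptive_close(force_n, spring_k_n_per_m, detect_slip, step_n=0.25, max_squeeze_m=0.01):
    """Start at the friction-based force, increase on slip, cap deformation via k*x."""
    while detect_slip(force_n):
        force_n += step_n
        if force_n / spring_k_n_per_m > max_squeeze_m:   # would deform the object too much
            break
    return force_n

if __name__ == "__main__":
    # Example: suppose an LLM says a raspberry is ~5 g, mu ~0.6, k ~150 N/m (illustrative).
    f0 = min_grasp_force(0.005, 0.6)
    f = adaptive_close(f0, 150.0, detect_slip=lambda f: f < 0.1)
    print(f"initial {f0:.3f} N, final {f:.3f} N")
```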
A multifunctional soft robotic shape display with high-speed actuation, sensing, and control

Posted July 31, 2023

Shape displays that actively manipulate surface geometry are an expanding robotics domain with applications to haptics, manufacturing, aerodynamics, and more. However, existing displays often lack high-fidelity shape morphing, high-speed deformation, and embedded state sensing, limiting their potential uses. Here, we demonstrate a multifunctional soft-shape display driven by a 10 × 10 array of scalable cellular units which combine high-speed electrohydraulic soft actuation, magnetic-based sensing, and control circuitry. We report high-performance reversible shape morphing up to 50 Hz, sensing of surface deformations with 0.1 mm sensitivity, and external forces with 50 mN sensitivity in each cell, which we demonstrate across a multitude of applications including user interaction, image display, sensing of object mass, and dynamic manipulation of solids and liquids. This work showcases the rich multifunctionality and high-performance capabilities that arise from tightly integrating large numbers of electrohydraulic actuators, soft sensors, and controllers at a previously undemonstrated scale in soft robotics.

References

Johnson, B.K., Naris, M., Sundaram, V., Volchko, A., Ly, K., Mitchell, S.K., Acome, E., Kellaris, N., Keplinger, C., Correll, N., Humbert, J.S. and Rentschler, M.E., 2023. A multifunctional soft robotic shape display with high-speed actuation, sensing, and control. Nature Communications, 14, 4516. https://www.nature.com/articles/s41467-023-39842-2.pdf
A versatile robotic hand with 3D perception, force sensing for autonomous manipulation

Posted July 10, 2023

We describe a force-controlled robotic gripper with built-in tactile and 3D perception. We also describe a complete autonomous manipulation pipeline consisting of object detection, segmentation, point cloud processing, force-controlled manipulation, and symbolic (re)-planning. The design emphasizes versatility in terms of applications, manufacturability, use of commercial off-the-shelf parts, and open-source software. We validate the design by characterizing force control (achieving up to 32 N, controllable in steps of 0.08 N), force measurement, and two manipulation demonstrations: assembly of the Siemens gear assembly problem, and a sensor-based stacking task requiring replanning. These demonstrate robust execution of long sequences of sensor-based manipulation tasks, which makes the resulting platform a solid foundation for researchers in task-and-motion planning, educators, and quick prototyping of household and warehouse automation tasks.

References

Correll, N., Kriegman, D., Otte, S. and Watson, J., 2023. A versatile robotic hand with 3D perception, force sensing for autonomous manipulation. In Proceedings of the Workshop on Perception and Manipulation Challenges for Warehouse Automation at Robotics: Science and Systems, Daegu, Korea. http://rss23.armbench.com/
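The pipeline reads naturally as a perceive-grasp-check loop with symbolic re-planning on failure; the sketch below is a stubbed illustration of that control flow, not the system's code. Only the 32 N limit and 0.08 N force step come from the abstract; every function and class name is a placeholder.

```python
# Stubbed illustration of a detect -> segment -> grasp -> (re)plan loop, in the
# spirit of the pipeline described above. All functions are placeholders; the
# force quantization simply mirrors the reported 0.08 N control resolution.
from dataclasses import dataclass
from typing import List

FORCE_STEP_N = 0.08   # reported controllable force increment
MAX_FORCE_N = 32.0    # reported maximum grip force

def quantize_force(f_n: float) -> float:
    return min(MAX_FORCE_N, round(f_n / FORCE_STEP_N) * FORCE_STEP_N)

@dataclass
class Step:
    target: str
    force_n: float

def run_task(goal: str, plan, perceive, grasp, check) -> int:
    """Execute a symbolic plan step by step, re-planning when a step fails."""
    steps: List[Step] = plan(goal)
    attempts = 0
    while steps:
        step = steps.pop(0)
        cloud = perceive(step.target)                      # detection + segmentation + point cloud
        attempts += 1
        if not (grasp(cloud, quantize_force(step.force_n)) and check(step)):
            steps = plan(goal)                             # symbolic re-planning
    return attempts

if __name__ == "__main__":
    flaky = {"count": 0}
    def grasp(cloud, force):
        flaky["count"] += 1
        return flaky["count"] > 1                          # first grasp fails, rest succeed
    print(run_task("stack blocks",
                   plan=lambda g: [Step("block_a", 3.0), Step("block_b", 3.0)],
                   perceive=lambda t: f"cloud({t})",
                   grasp=grasp,
                   check=lambda s: True))
```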
Early failure prediction during robotic assembly using Transformers

Posted July 10, 2023

Peg-in-hole assembly of tightly fitting parts often requires multiple attempts. Parts need to be put together by performing a wiggling motion of undetermined length and can get stuck, requiring a restart. Recognizing unsuccessful insertion attempts early can help in reducing the makespan of the assembly. This can be achieved by analyzing time-series data from force and torque measurements. We describe a transformer neural network model that predicts failure three times faster, i.e., from much shorter time series, than a dilated fully convolutional neural network. Although the transformer provides predictions with higher confidence, it does so at reduced accuracy. Yet, by calling unsuccessful attempts early, makespan can be reduced by almost 40%, which we show using a dataset of force-torque data from 241 peg-in-hole assembly runs with known outcomes.

References

Montané-Güell, R., Watson, J. and Correll, N., 2023. Early failure prediction during robotic assembly using Transformers. In Proceedings of the Workshop on Robotics and AI: The Future of Industrial Assembly Tasks at Robotics: Science and Systems, Daegu, Korea. https://sites.google.com/nvidia.com/industrial-assembly
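A minimal PyTorch sketch of a transformer classifier over force-torque time series, in the spirit of the model described above: embed the 6-axis wrench samples, encode the prefix observed so far, pool over time, and classify success versus failure. Hyperparameters and the input layout are assumptions, not the paper's.

```python
# Minimal sketch of a transformer classifier over force-torque time series for
# early failure prediction. Hyperparameters, windowing, and the 6-channel F/T
# input layout are illustrative assumptions.
import torch
import torch.nn as nn

class FailurePredictor(nn.Module):
    def __init__(self, n_channels=6, d_model=64, n_heads=4, n_layers=2):
        super().__init__()
        self.embed = nn.Linear(n_channels, d_model)
        layer = nn.TransformerEncoderLayer(d_model, n_heads, dim_feedforward=128,
                                           batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, n_layers)
        self.head = nn.Linear(d_model, 2)        # {success, failure}

    def forward(self, ft_series):
        """ft_series: (batch, time, 6) wrench samples; classify from the prefix seen so far."""
        h = self.encoder(self.embed(ft_series))
        return self.head(h.mean(dim=1))           # pool over time, then classify

if __name__ == "__main__":
    model = FailurePredictor()
    prefix = torch.randn(8, 50, 6)                # early prediction: only the first 50 samples
    logits = model(prefix)
    print("p(failure) =", torch.softmax(logits, dim=-1)[:, 1])
```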
Distributed Tactile Sensors for Palmar Surfaces of Prosthetic Hands

Posted May 19, 2023

Figure: Different grasps that require palmar sensing.
Sensory feedback provided by prosthetic hands shows promise in increasing functional abilities and promoting embodiment of the prosthetic device. However, sensory feedback is limited based on where sensors are placed on the prosthetic device and has mainly focused on sensorizing the fingertips. Here we describe distributed tactile sensors for the palmar surfaces of prosthetic hands. We believe a sensing system that can detect interactions across the palmar surfaces in addition to the fingertips will further improve the experience for the prosthetic user and may increase embodiment of the device as well. This work details the design of a compliant distributed sensor which consists of PiezoResistive and PiezoElectric layers to produce a robust force measurement of both static and dynamic loads. This assembled sensor system is easy to customize to cover different areas of the prosthetic hand, simple to scale up, and flexible to different fabrication form-factors. The experimental results detail a load estimation accuracy of 95.4% and a sensor response time of less than 200 ms. Cycle tests of each sensor show drift within 10% of sensing capability under load and 6.37% in a no-load longitudinal test. These validation experiments reinforce the ability of the DualPiezo structure to provide a valuable sensor design for the palmar surfaces of prosthetic hands.

References

Truong, H., Correll, N. and Segil, J., 2023. Distributed Tactile Sensors for Palmar Surfaces of Prosthetic Hands. In 2023 11th International IEEE/EMBS Conference on Neural Engineering (NER), pp. 1-4. IEEE. https://ieeexplore.ieee.org/abstract/document/10123819
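A hedged sketch of how the two layers could be fused in software: the piezoresistive channel is smoothed into a static load estimate, while the piezoelectric channel flags fast load changes. The calibration constants and thresholds are illustrative, not the paper's.

```python
# Illustrative fusion of the two sensing layers described above: a piezoresistive
# channel for static load and a piezoelectric channel for dynamic contact events.
# Gains, smoothing factors, and thresholds are made-up values.
import numpy as np

def estimate_load(piezoresistive_v, gain_n_per_v=4.0, alpha=0.1):
    """Smooth the piezoresistive voltage and map it to a static force estimate (N)."""
    est, out = 0.0, []
    for v in piezoresistive_v:
        est = alpha * v + (1 - alpha) * est
        out.append(est * gain_n_per_v)
    return np.array(out)

def detect_contact_events(piezoelectric_v, threshold_v=0.05):
    """Flag samples where the piezoelectric layer sees a fast change in load."""
    return np.abs(np.diff(piezoelectric_v, prepend=piezoelectric_v[0])) > threshold_v

if __name__ == "__main__":
    t = np.linspace(0, 1, 200)
    static = np.where(t > 0.3, 0.5, 0.0)              # a press starting at t = 0.3 s
    dynamic = np.gradient(static, t)
    print("steady load (N):", estimate_load(static)[-1])
    print("contact events:", detect_contact_events(dynamic * 0.01).sum())
```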