26th Southern Biomedical Engineering Conference SBEC 2010, April 30 – May 2, 2010, College Park, Maryland, USA

IFMBE Proceedings Series Editor: R. Magjarevic

Volume 32

The International Federation for Medical and Biological Engineering, IFMBE, is a federation of national and transnational organizations representing internationally the interests of medical and biological engineering and sciences. The IFMBE is a non-profit organization fostering the creation, dissemination, and application of medical and biological engineering knowledge and the management of technology for improved health and quality of life. Its activities include participation in the formulation of public policy and the dissemination of information through publications and forums. Within the field of medical, clinical, and biological engineering, IFMBE's aims are to encourage research and the application of knowledge, and to disseminate information and promote collaboration. The objectives of the IFMBE are scientific, technological, literary, and educational. The IFMBE is a WHO-accredited NGO covering the full range of biomedical and clinical engineering, healthcare, and healthcare technology and management. Through its 60 member societies, it represents some 120,000 professionals involved in the various issues of improved health and health care delivery.

IFMBE Officers
President: Herbert Voigt; Vice-President: Ratko Magjarevic; Past-President: Makoto Kikuchi; Treasurer: Shankar M. Krishnan; Secretary-General: James Goh
http://www.ifmbe.org

Previous Editions:

IFMBE Proceedings SBEC 2010, "26th Southern Biomedical Engineering Conference SBEC 2010 April 30 – May 2, 2010 College Park, Maryland, USA", Vol. 32, 2010, Maryland, USA, CD
IFMBE Proceedings WCB 2010, "6th World Congress of Biomechanics (WCB 2010)", Vol. 31, 2010, Singapore, CD
IFMBE Proceedings BIOMAG2010, "17th International Conference on Biomagnetism Advances in Biomagnetism – Biomag2010", Vol. 28, 2010, Dubrovnik, Croatia, CD
IFMBE Proceedings ICDBME 2010, "The Third International Conference on the Development of Biomedical Engineering in Vietnam", Vol. 27, 2010, Ho Chi Minh City, Vietnam, CD
IFMBE Proceedings MEDITECH 2009, "International Conference on Advancements of Medicine and Health Care through Technology", Vol. 26, 2009, Cluj-Napoca, Romania, CD
IFMBE Proceedings WC 2009, "World Congress on Medical Physics and Biomedical Engineering", Vol. 25, 2009, Munich, Germany, CD
IFMBE Proceedings SBEC 2009, "25th Southern Biomedical Engineering Conference 2009", Vol. 24, 2009, Miami, FL, USA, CD
IFMBE Proceedings ICBME 2008, "13th International Conference on Biomedical Engineering", Vol. 23, 2008, Singapore, CD
IFMBE Proceedings ECIFMBE 2008, "4th European Conference of the International Federation for Medical and Biological Engineering", Vol. 22, 2008, Antwerp, Belgium, CD
IFMBE Proceedings BIOMED 2008, "4th Kuala Lumpur International Conference on Biomedical Engineering", Vol. 21, 2008, Kuala Lumpur, Malaysia, CD
IFMBE Proceedings NBC 2008, "14th Nordic-Baltic Conference on Biomedical Engineering and Medical Physics", Vol. 20, 2008, Riga, Latvia, CD
IFMBE Proceedings APCMBE 2008, "7th Asian-Pacific Conference on Medical and Biological Engineering", Vol. 19, 2008, Beijing, China, CD
IFMBE Proceedings CLAIB 2007, "IV Latin American Congress on Biomedical Engineering 2007, Bioengineering Solution for Latin America Health", Vol. 18, 2007, Margarita Island, Venezuela, CD
IFMBE Proceedings ICEBI 2007, "13th International Conference on Electrical Bioimpedance and the 8th Conference on Electrical Impedance Tomography", Vol. 17, 2007, Graz, Austria, CD
IFMBE Proceedings MEDICON 2007, "11th Mediterranean Conference on Medical and Biological Engineering and Computing 2007", Vol. 16, 2007, Ljubljana, Slovenia, CD
IFMBE Proceedings BIOMED 2006, "Kuala Lumpur International Conference on Biomedical Engineering", Vol. 15, 2004, Kuala Lumpur, Malaysia, CD
IFMBE Proceedings WC 2006, "World Congress on Medical Physics and Biomedical Engineering", Vol. 14, 2006, Seoul, Korea, DVD

IFMBE Proceedings Vol. 32

Keith E. Herold, William E. Bentley, and Jafar Vossoughi (Eds.)

26th Southern Biomedical Engineering Conference SBEC 2010 April 30 – May 2, 2010 College Park, Maryland, USA


Editors

Jafar Vossoughi
Biomed Research Foundation
Olney, MD 20832, USA
Email: [emailprotected]

Keith E. Herold, Ph.D.
University of Maryland
Dept. of Bioengineering
Glenn L. Martin Hall 2181
College Park, MD 20742, USA
E-mail: [emailprotected]

William E. Bentley
University of Maryland
Fischell Dept. of Bioengineering
College Park, MD 20742, USA
Email: [emailprotected]

ISSN 1680-0737
ISBN 978-3-642-14997-9
e-ISBN 978-3-642-14998-6

DOI 10.1007/978-3-642-14998-6
Library of Congress Control Number: 2010932014

© International Federation for Medical and Biological Engineering 2010

This work is subject to copyright. All rights are reserved, whether the whole or part of the material is concerned, specifically the rights of translation, reprinting, reuse of illustrations, recitation, broadcasting, reproduction on microfilm or in any other way, and storage in data banks. Duplication of this publication or parts thereof is permitted only under the provisions of the German Copyright Law of September 9, 1965, in its current version, and permissions for use must always be obtained from Springer. Violations are liable to prosecution under the German Copyright Law.

The use of general descriptive names, registered names, trademarks, etc. in this publication does not imply, even in the absence of a specific statement, that such names are exempt from the relevant protective laws and regulations and therefore free for general use.

The IFMBE Proceedings is an Official Publication of the International Federation for Medical and Biological Engineering (IFMBE).

Typesetting: Scientific Publishing Services Pvt. Ltd., Chennai, India
Cover Design: deblik, Berlin

Printed on acid-free paper
springer.com


Preface

The 26th Southern Biomedical Engineering Conference was hosted by the Fischell Department of Bioengineering and the A. James Clark School of Engineering from April 30 to May 2, 2010. The conference program consisted of 168 oral presentations and 21 poster presentations, with approximately 250 registered participants, of which about half were students. The sessions were designed along topical lines, with student papers mixed in randomly with those of more senior investigators. There was a Student Competition resulting in several Best Paper and Honorable Mention awards. There were 32 technical sessions, running in 6–7 parallel tracks.

This Proceedings is a subset of the papers submitted to the conference. It includes 147 papers organized in topical areas. Many thanks go out to the paper reviewers, who significantly improved the clarity of the submitted papers. We greatly appreciate the opportunity to team with the IFMBE – the International Federation for Medical and Biological Engineering – which endorsed the conference and made this Proceedings possible through its relationship with Springer. In addition, the endorsement by BMES – the Biomedical Engineering Society – provided excellent visibility for the SBEC through listings on the BMES website.

The National Cancer Institute (NCI) of the U.S. National Institutes of Health sponsored Session 33, a special Memorial Session for a longtime NCI program manager, Dr. James W. Jacobson, who died recently. The special session topic was Technologies for Cancer Diagnostics, and four of the papers from that session are included in this Proceedings. NCI's support is gratefully acknowledged. This session was made possible by Dr. Avraham Rasooly at NCI, who organized and promoted the session and who was responsible for the best concentration of science at the conference.

Finally, special thanks go to the NSF – the National Science Foundation – for its generous support of this conference series over many years. NSF's support allows the organizers to subsidize student participation, to market the conference to a broader range of potential participants, and thereby to achieve a higher overall educational value. We hope that this permanent record of the conference will be a useful tool for researchers in the broad field of biomedical engineering.

SBEC Conference Co-chairs
Keith Herold
William Bentley
Jafar Vossoughi

Table of Contents

Traumatic Brain Injury

Traumatic Brain Injury in Rats Caused by Blast-Induced Hyper-Acceleration (G. Fiskum, J. Hazelton, R. Gullapalli, W.L. Fourney) ..... 1
Early Metabolic and Structural Changes in the Rat Brain Following Trauma in vivo Using MRI (S. Xu, J. Zhuo, J. Racz, S. Roys, D. Shi, G. Fiskum, R. Gullapalli) ..... 5
Principal Components of Brain Deformation in Response to Skull Acceleration: The Roles of Sliding and Tethering between the Brain and Skull (Teresa M. Abney, Y. Aaron Feng, Robert Pless, Ruth J. Okamoto, Guy M. Genin, Philip V. Bayly) ..... 9
Investigations into Wave Propagation in Soft Tissue (M.F. Valdez, B. Balachandran) ..... 13
Correlating Tissue Response with Anatomical Location of mTBI Using a Human Head Finite Element Model under Simulated Blast Conditions (T.P. Harrigan, J.C. Roberts, E.E. Ward, A.C. Merkle) ..... 18
Human Surrogate Head Response to Dynamic Overpressure Loading in Protected and Unprotected Conditions (A.C. Merkle, I.D. Wing, J.C. Roberts) ..... 22
Blast-Induced Traumatic Brain Injury: Using a Shock Tube to Recreate a Battlefield Injury in the Laboratory (J.B. Long, L. Tong, R.A. Bauman, J.L. Atkins, A.J. Januszkiewicz, C. Riccio, R. Gharavi, R. Shoge, S. Parks, D.V. Ritzel, T.B. Bentley) ..... 26
Wave Propagation in the Human Brain and Skull Imaged in vivo by MR Elastography (E.H. Clayton, G.M. Genin, P.V. Bayly) ..... 31
Cavitation as a Possible Traumatic Brain Injury (TBI) Damage Mechanism (Andrew Wardlaw, Jack Goeller) ..... 34
Prognostic Ability of Diffusion Tensor Imaging Parameters among Severely Injured Traumatic Brain Injury Patients (Joshua F. Betz, Jiachen Zhuo, Anindya Roy, Kathirkamanthan Shanmuganathan, Rao P. Gullapalli) ..... 38

Auditory Science

Hair Cell Regeneration in the Mammalian Ear, Is Gene Therapy the Answer? (Matthew W. Kelley) ..... 42
Magnetoencephalography and Auditory Neural Representations (J.Z. Simon, N. Ding) ..... 45
Voice Pitch Processing with Cochlear Implants (Monita Chatterjee, Shu-Chen Peng, Lauren Wawroski, Cherish Oberzut) ..... 49
Transcranial Magnetic Stimulation as a Tool for Investigating and Treating Tinnitus (G.F. Wittenberg) ..... 53

Bioengineering Education

A Course Guideline for Biomedical Engineering Modeling and Design for Freshmen (W.C. Wong, E.B. Haase) ..... 56
Classroom Nuclear Magnetic Resonance System (C.L. Zimmerman, E.S. Boyden, S.C. Wasserman) ..... 61
The Basics of Bioengineering Education (Arthur T. Johnson) ..... 65
HealthiManage: An Individualized Prediction Algorithm for Type 2 Diabetes Chronic Disease Control (Salim Chemlal, Sheri Colberg, Marta Satin-Smith, Eric Gyuricsko, Tom Hubbard, Mark W. Scerbo, Frederic D. McKenzie) ..... 67

Cellular Engineering

Dynamic Movement and Property Changes in Live Mesangial Cells by Stimuli (Gi Ja Lee, Samjin Choi, Jeong Hoon Park, Kyung Sook Kim, Ilsung Cho, Sang Ho Lee, Hun Kuk Park) ..... 71
Cooperative Interactions between Myosin II and Cortexillin I Mediated by Actin Filaments during Cellular Deformation (Tianzhi Luo, Douglas N. Robinson) ..... 74

Devices

Constitutive Law for Miniaturized Quantitative Microdialysis (C.-f. Chen) ..... 77
Non-invasive Estimation of Intracranial Pressure by Means of Retinal Venous Pulsatility (S. Mojtaba Golzan, Stuart L. Graham, Alberto Avolio) ..... 81
Apparatus for Quantitative Slit-Lamp Ocular Fluorometry (José P. Domingues, Isa Branco, António M. Morgado) ..... 85
Changes in Viscoelastic Properties of Latex Condoms Due to Personal Lubricants (Srilekha Sarkar Das, Matthew Schwerin, Donna Walsh, Charles Tack, D. Coleman Richardson) ..... 89
Towards the Objective Evaluation of Hand Disinfection (Ákos Lehotsky, Melinda Nagy, Tamás Haidegger) ..... 92

Neural Systems Engineering

In vitro Models for Measuring Charge Storage Capacity (K.F. Zaidi, Z.H. Benchekroun, S. Minnikanti, J. Pancrazio, N. Peixoto) ..... 97
Discovery of Long-Latency Somatosensory Evoked Potentials as a Marker of Cardiac Arrest Induced Brain Injury (Dan Wu, Jai Madhok, Young-Seok Choi, Xiaofeng Jia, Nitish V. Thakor) ..... 101
In vivo Characterization of Epileptic Tissue with Time-Dependent, Diffuse Reflectance Spectroscopy (Nitin Yadav, Sanghoon Oh, Sanjeev Bhatia, John Ragheb, Prasanna Jayakar, Michael Duchowny, Wei-Chiang Lin) ..... 105

Kinematics

Effects of Initial Grasping Forces, Axes, and Directions on Torque Production during Circular Object Manipulation (J. Huang, J.K. Shim) ..... 109
Time Independent Functional Training of Inter-joint Arm Coordination Using the ARMin III Robot (E.B. Brokaw, T. Nef, T.M. Murray, P.S. Lum) ..... 113
Kinematic Analysis in Robot Assisted Femur Fracture Reduction: Fuzzy Logic Approach (Wang Song, Chen Yonghua, Ye Ruihua, Yau WaiPan) ..... 118
Compensation for Weak Hip Abductors in Gait Assisted by a Novel Crutch-Like Device (J.R. Borrelli, H.W. Haslach Jr.) ..... 122

Nanotechnology

Measuring in vivo Effects of Chemotherapy Treatment on Cardiac Capillary Permeability (A. Fernandez-Fernandez, D.A. Carvajal, A.J. McGoron) ..... 126
Nanoscale "DNA Baskets" for the Delivery of siRNA (A.C. Zirzow, M. Skoblov, A. Patanarut, C. Smith, A. Fisher, V. Chandhoke, A. Baranova) ..... 130
Nanoscale Glutathione Patches Improve Organ Function (Homer Nazeran, Sherry Blake-Greenberg) ..... 134
Nanoscale Carnosine Patches Improve Organ Function (Homer Nazeran, Sherry Blake-Greenberg) ..... 138
Multiple Lumiphore-Bound Nanoparticles for in vivo Quantification of Localized Oxygen Levels (J.L. Van Druff, W. Zhou, E. Asman, J.B. Leach) ..... 142
Ion-Mobility Characterization of Functionalized and Aggregated Gold Nanoparticles for Drug Delivery (D.-H. Tsai, L.F. Pease III, R.A. Zangmeister, S. Guha, M.J. Tarlov, M.R. Zachariah) ..... 146

Implants

Quantitative Mapping of Vascular Geometry for Implant Sites (J.W. Karanian, O. Lopez, D. Rad, B. McDowell, M. Kreitz, J. Esparza, J. Vossoughi, O.A. Chiesa, W.F. Pritchard) ..... 150
Failure Analysis and Materials Characterization of Hip Implants (A.M. Bastidos, S.W. Stafford) ..... 154
Nano-Wear-Particulates Elicit a Size and Dose Dependent Response by RAW 264.7 Cells (Mrinal K. Musib, Subrata Saha) ..... 158
Viscous Behavior of Different Concentrations of Bovine Calf Serum Used to Lubricate the Micro-textured CoCrMo Alloy Material before and after Wear Testing (Geriel Ettienne-Modeste, Timmie Topoleski) ..... 161
Progressive Wear Damage Analysis on Retrieved UHMWPE Tibial Implants (N. Camacho, S.W. Stafford, L. Trueba Jr.) ..... 165

Tissue Engineering

Gum Arabic-Chitosan Composite Biopolymer Scaffolds for Bone Tissue Engineering (R.A. Silva, P. Mehl, O.C. Wilson) ..... 171
Modification of Hydrogel Scaffolds for the Modulation of Corneal Epithelial Cell Responses (L.G. Reis, P. Pattekari, P.S. Sit) ..... 175
Making of Functional Tissue Engineered Heart Valve (S.S. Patel, Y.S. Morsi) ..... 180
Ties That Bind: Evaluation of Collagen I and α-Chitin (Tiffany Omokanwaye, Otto Wilson Jr.) ..... 183
Chitosan/Poly(ε-Caprolactone) Composite Hydrogel for Tissue Engineering Applications (Xia Zhong, Chengdong Ji, Sergei G. Kazarian, Andrew Ruys, Fariba Dehghani) ..... 188

Disease Modeling

Modeling and Control of HIV by Computational Intelligence Techniques (N. Bazyar Shourabi) ..... 192
Mathematical Modeling of Ebola Virus Dynamics as a Step towards Rational Vaccine Design (Sophia Banton, Zvi Roth, Mirjana Pavlovic) ..... 196
Respiratory Impedance Values in Adults Are Relatively Insensitive to Mead Model Lung Compliance and Chest Wall Compliance Parameters (Bill Diong, Michael D. Goldman, Homer Nazeran) ..... 201
A Systems Biology Model of Alzheimer's Disease Incorporating Spatial-temporal Distribution of Beta Amyloid (C.R. Kyrtsos, J.S. Baras) ..... 204
A Mathematical Model of the Primary T Cell Response with Contraction Governed by Adaptive Regulatory T Cells (S.N. Wilson, P. Lee, D. Levy) ..... 209
A Mathematical Model for Microenvironmental Control of Tumor Growth (A.R. Galante, D. Levy, C. Tomasetti) ..... 213
Assessing the Usability of Web-Based Personal Health Records (Pedro Gonzales, Binh Q. Tran) ..... 217

Drug Delivery

Real Time Monitoring of Extracellular Glutamate Release in Rat Ischemia Model Treated by Nimodipine (E.K. Park, G.J. Lee, S.K. Choi, S. Choi, S.W. Kang, S.J. Chae, H.K. Park) ..... 221
Targeted Delivery of Doxorubicin by PLGA Nanoparticles Increases Drug Uptake in Cancer Cell Lines (Tingjun Lei, Supriya Srinivasan, Yuan Tang, Romila Manchanda, Alicia Fernandez-Fernandez, Anthony J. McGoron) ..... 224
Cellular Uptake and Cytotoxicity of a Novel ICG-DOX-PLGA Dual Agent Polymer Nanoparticle Delivery System (Romila Manchanda, Tingjun Lei, Yuan Tang, Alicia Fernandez-Fernandez, Anthony J. McGoron) ..... 228
Electrospray – Differential Mobility Analysis (ES-DMA) for Characterization of Heat Induced Antibody Aggregates (Suvajyoti Guha, Joshua Wayment, Michael J. Tarlov, Michael R. Zachariah) ..... 232
Mechanisms of Poly(amido amine) Dendrimer Transepithelial Transport and Tight Junction Modulation in Caco-2 Cells (D.S. Goldberg, P.W. Swaan, H. Ghandehari) ..... 236
Absorbable Coatings: Structure and Drug Elution (S. Sarkar Das, M.K. McDermott, A.D. Lucas, T.E. Cargal, L. Patel, D.M. Saylor, D.V. Patwardhan) ..... 240

Special Topics

A Brief Comparison of Adaptive Noise Cancellation, Wavelet and Cycle-by-Cycle Fourier Series Analysis for Reduction of Motional Artifacts from PPG Signals (M. Malekmohammadi, A. Moein) ..... 243
Respiratory Resistance Measurements during Exercise Using the Airflow Perturbation Device (P. Chapain, A. Johnson, J. Vossoughi, S. Majd) ..... 247
Comparison of IOS Parameters to aRIC Respiratory System Model Parameters in Normal and COPD Adults (Michael Mangum, Bill Diong, Michael D. Goldman, Homer Nazeran) ..... 251
Effect of Waveform Shape and Duration on Defibrillation Threshold in Rabbit Hearts (J. Stohlman, F. Aguel, G. Calcagnini, E. Mattei, M. Triventi, F. Censi, P. Bartolini, V. Krauthamer) ..... 254
The Measurement and Processing of EEG Signals to Evaluate Fatigue (M.R. Yousefi Zoshk, M. Azarnoosh) ..... 258
Modeling for the Impact of Anesthesia on Neural Activity in the Auditory System (Z.B. Tan, L.Y. Wang, H. Wang, X.G. Zhang, J.S. Zhang) ..... 262
Cortical Excitability Changes after Repetitive Self-regulated vs. Tracking Movements of the Hand (S.B. Godfrey, P.S. Lum, C.N. Schabowsky, M.L. Harris-Love) ..... 266
What the ENT Wants in the OR: Bioengineering Prospects (D.A. Depireux, D.J. Eisenman) ..... 270
An in vitro Biomechanical Comparison of Human Dermis to a Silicone Biosimulant Material (I.D. Wing, H.A. Conner, P.J. Biermann, S.M. Belkoff) ..... 274
Telemetric Epilepsy Monitoring and Seizures Aid (K. Hameed, F. Azhar, I. Shahrukh, M. Muzammil, M. Aamair, D. Mujeeb) ..... 278
Spike Detection for Integrated Circuits: Comparative Study (A. Sarje, P. Abshire) ..... 282
Effect of Ambient Humidity on the Electrical Conductance of a Titanium Oxide Coating Being Investigated for Potential Use in Biosensors (Jorge Torres, James Sweeney, Jose Barreto) ..... 286
Brain Computer Interface in Cerebellar Ataxia (G.I. Newman, S.H. Ying, Y.-S. Choi, H.-N. Kim, A. Presacco, M.V. Kothare, N.V. Thakor) ..... 289

Biosensors

Effects of Stray Field Distribution Generated by Magnetic Beads on Giant Magnetoresistance Sensor for Biochip Applications (Kyung Sook Kim, Samjin Choi, Gi Ja Lee, Dong Hyun Park, Jeong Hoon Park, Il Sung Jo, Hun-Kuk Park) ..... 293
Electrostatic Purification of Nucleic Acids for Micro Total Analysis Systems (E. Hoppmann, I.M. White) ..... 297
Applicability of Surface Enhanced Raman Spectroscopy for Determining the Concentration of Adenine and S-Adenosyl Homocysteine in a Microfluidic System (Omar Bekdash, Jordan Betz, Yi Cheng, Gary W. Rubloff) ..... 301
Integration of Capillary Ring Resonator Biosensor with PDMS Microfluidics for Label-Free Biosensing (Farnoosh Farahi, Ian White) ..... 305
Surface Plasmon-Coupled Emission from Rhodamine-6G Aggregates for Ratiometric Detection of Ethanol Vapors (R. Sai Sathish, Y. Kostov, G. Rao) ..... 309
Formation of Dendritic Silver Substrates by Galvanic Displacement for Surface Enhanced Raman Spectroscopy (Jordan Betz, Yi Cheng, Omar Bekdash, Susan Buckhout-White, Gary W. Rubloff) ..... 313
High Specificity Binding of Lectins to Carbohydrate Functionalized Etched Fiber Bragg Grating Optical Sensors (Geunmin Ryu, Mario Dagenais, Matthew T. Hurley, Philip DeShong) ..... 317

Oximetry

Oximetry and Blood Flow in the Retina (P. Lemaillet, A. Lompado, D. Duncan, Q.D. Nguyen, J.C. Ramella-Roman) ..... 321
Monitoring and Controlling Oxygen Levels in Microfluidic Devices (Peter C. Thomas, Srinivasa R. Raghavan, Samuel P. Forry) ..... 325
An Imaging Pulse Oximeter Based on a Multi-Aperture Camera (Ali Basiri, Jessica C. Ramella-Roman) ..... 329
Fluorescent Microparticles for Sensing Cell Microenvironment Oxygen Levels within 3D Scaffolds (Miguel A. Acosta, Jennie B. Leach) ..... 332
Determination of in vivo Blood Oxygen Saturation and Blood Volume Fraction Using Diffuse Reflectance Spectroscopy (P. Chen, W. Lin) ..... 336

Image Analysis

Fredholm Integral Equations in Biophysical Data Analysis (P. Schuck) ..... 340
High-Resolution Autofluorescence Imaging for Mapping Molecular Processes within the Human Retina (Martin Ehler, Zigurts Majumdar, Emily King, Julia Dobrosotskaya, Emily Chew, Wai Wong, Denise Cunningham, Wojciech Czaja, Robert F. Bonner) ..... 344
Local Histograms for Classifying H&E Stained Tissues (M.L. Massar, R. Bhagavatula, M. Fickus, J. Kovačević) ..... 348
Detecting and Classifying Cancers from Image Data Using Optimal Transportation (G.K. Rohde, W. Wang, D. Slepcev, A.B. Lee, C. Chen, J.A. Ozolek) ..... 353
Nanoscale Imaging of Chemical Elements in Biomedicine (M.A. Aronova, Y.C. Kim, A.A. Sousa, G. Zhang, R.D. Leapman) ..... 357
Sparse Representation and Variational Methods in Retinal Image Processing (J. Dobrosotskaya, M. Ehler, E. King, R. Bonner, W. Czaja) ..... 361

Neuromechanics & Rehabilitation

Optimization and Validation of a Biomechanical Model for Analyzing Running-Specific Prostheses (Brian S. Baum, Roozbeh Borjian, You-Sin Kim, Alison Linberg, Jae Kun Shim) ..... 365
Prehension Synergy: Use of Mechanical Advantage during Multi-finger Torque Production on Mechanically Fixed- and Free-Object (Jaebum Park, You-Sin Kim, Brian S. Baum, Yoon Hyuk Kim, Jae Kun Shim) ..... 368

Modeling, Optimizing & Monitoring

Investigating Vortex Ring Propagation Speed Past Prosthetic Heart Valves: Implications for Assessing Valve Performance (Ann Bailey, Michelle Beatty, Olga Pierrakos) ..... 372
Transient Heat Transfer in a Dental Prosthesis Implanted in Mandibular Bone (M.N. Ashtiani, R. Imani) ..... 376
Characterization of Material Properties of Aorta from Oscillatory Pressure Tests (V.V. Romanov, K. Darvish, S. Assari) ..... 380
Quasi-static Analysis of Electric Field Distributions by Disc Electrodes in a Rabbit Eye Model (S. Minnikanti, E. Cohen, N. Peixoto) ..... 385
Optimizing the Geometry of Deep Brain Stimulating Electrodes (J.Y. Zhang, W.M. Grill) ..... 389
Exploratory Parcellation of fMRI Data Based on Finite Mixture Models and Self-Annealing Expectation Maximization (S. Maleki Balajoo, G.A. Hossein-Zadeh, H. Soltanian-Zadeh) ..... 393
Computational Fluid Dynamic Modeling of the Airflow Perturbation Device (S. Majd, J. Vossoughi, A. Johnson) ..... 397

Biomaterials

Mechanism and Direct Visualization of Electrodeposition of the Polysaccharide Chitosan (Yi Cheng, Xiaolong Luo, Jordan Betz, Omar Bekdash, Gary W. Rubloff) ..... 401
Chito-Cotton: Chitosan Coated Cotton-Based Scaffold (O. Agubuzo, P. Mehl, O.C. Wilson, R. Silva) ..... 404
Effects of Temperature on the Performance of Footwear Foams: Review of Developments (M.R. Shariatmadari, R. English, G. Rothwell) ..... 409
A Tissue Equivalent Phantom of the Human Torso for in vivo Biocompatible Communications (David M. Peterson, Walker Turner, Kevin Pham, Hong Yu, Rizwan Bashirullah, Neil Euliano, Jeffery R. Fitzsimmons) ..... 414
Identification of Bacteria and Sterilization of Crustacean Exoskeleton Used as a Biomaterial (Tiffany Omokanwaye, Donae Owens, Otto Wilson Jr.) ..... 418
Neural Stem Cell Differentiation in 2D and 3D Microenvironments (A.S. Ribeiro, E.M. Powell, J.B. Leach) ..... 422
A Microfluidic Platform for Optical Monitoring of Bacterial Biofilms (M.T. Meyer, V. Roy, W.E. Bentley, R. Ghodssi) ..... 426
Conduction Properties of Decellularized Nerve Biomaterials (M.G. Urbanchek, B.S. Shim, Z. Baghmanli, B. Wei, K. Schroeder, N.B. Langhals, R.M. Miriani, B.M. Egeland, D.R. Kipke, D.C. Martin, P.S. Cederna) ..... 430
Reverse Cholesterol Transport (RCT) Modeling with Integrated Software Configurator (S. Adhikari) ..... 434

Biomechanics

Modeling Linear Head Impact and the Effect of Brain-Skull Interface (K. Laksari, S. Assari, K. Darvish) ..... 437
Mechanics of CSF Flow through Trabecular Architecture in the Brain (Parisa Saboori, Catherine Germanier, Ali Sadegh) ..... 440
Impact of Mechanical Loading to Normal and Aneurysmal Cerebral Arteries (M. Zoghi-Moghadam, P. Saboori, A. Sadegh) ..... 444
Identification of Material Properties of Human Brain under Large Shear Deformation: Analytical versus Finite Element Approach (C.D. Untaroiu, Q. Zhang, A.M. Damon, J.R. Crandall, K. Darvish, G. Paskoff, B.S. Shender) ..... 448
Mechanisms of Traumatic Rupture of the Aorta: Recent Multi-Scale Investigations (N.A. White, C.S. Shah, W.N. Hardy) ..... 452
Head Impact Response: Pressure Analysis Simulation (R.T. Cotton, P.G. Young, C.W. Pearce, L. Beldie, B. Walker) ..... 456

Imaging

An Introduction to the Next Generation of Radiology in the Web 2.0 World (A. Moein, M. Malekmohammadi, K. Youssefi) ..... 459
Novel Detection Method for Monitoring of Dental Caries Using Single Digital Subtraction Radiography (J.H. Park, Y.S. Choi, G.J. Lee, S. Choi, K.S. Kim, D.H. Park, I. Cho, H.K. Park) ..... 463
Targeted Delivery of Molecular Probes for in Vivo Electron Paramagnetic Resonance Imaging (S.R. Burks, E.D. Barth, S.S. Martin, G.M. Rosen, H.J. Halpern, J.P.Y. Kao) ..... 466
New Tools for Image-Based Mesh Generation of 3D Imaging Data (P.G. Young, D. Raymont, V. Bui Xuan, R.T. Cotton) ..... 470
Characterization of Speed and Accuracy of a Nonrigid Registration Accelerator on Pre- and Intraprocedural Images (Raj Shekhar, William Plishker, Sheng Xu, Jochen Kruecker, Peng Lei, Aradhana Venkatesan, Bradford Wood) ..... 473
Assessment of Kidney Structure and Function Using GRIN Lens Based Laparoscope with Optical Coherence Tomography (C.W. Chen, J. Wierwille, M.L. Onozato, P.M. Andrews, M. Phelan, J. Borin, Y. Chen) ..... 477
Reliability of Structural Equation Modeling of the Motor Cortex in Resting State Functional MRI (T. Kavallappa, S. Roys, A. Roy, J. Greenspan, R. Gullapalli, A. McMillan) ..... 481
Quantitative Characterization of Radiofrequency Ablation Lesions in Tissue Using Optical Coherence Tomography (J. Wierwille, A. McMillan, R. Gullapalli, J. Desai, Y. Chen) ..... 485
Clinically Relevant Hand Held Two Lead EEG Device (E.M. O'Brien, R.L. Elliott) ..... 489
A Simple Structural Magnetic Resonance Imaging (MRI) Method for 3D Mapping between Head Skin Tattoos and Brain Landmarks (Mulugeta Semework) ..... 493
Frame Potential Classification Algorithm for Retinal Data (John J. Benedetto, Wojciech Czaja, Martin Ehler) ..... 496
Raman-AFM Instrumentation and Characterization of SERS Substrates and Carbon Nanotubes (Q. Vu, M.H. Zhao, E. Wellner, X. Truong, P.D. Smith, A.J. Jin) ..... 500
A Novel Model of Skin Electrical Injury (Thu T.A. Nguyen, Ali Basiri, J.W. Shupp, A.R. Pavlovich, M.H. Jordan, Z. Sanford, J.C. Ramella-Roman) ..... 504
Design, Construction, and Evaluation of an Electrical Impedance Myographer (K. Lweesy, L. Fraiwan, D. Hadarees, A. Jamil, E. Ramadan) ..... 508
The Role of Imaging Tools in Biomedical Research: Preclinical Stent Implant Study (W.F. Pritchard, M. Kreitz, O. Lopez, D. Rad, B. McDowell, S. Nagaraja, M.L. Dreher, J. Esparza, J. Vossoughi, O.A. Chiesa, J.W. Karanian) ..... 512

Hard Tissue and Posture

Optimization of Screw Positioning in Mandible during Bilateral Sagittal Split Osteotomy Using Finite Element Method (A. Raeisi Najafi, A. Pashaei, S. Majd, I. Zoljanahi Oskui, B. Bohluli) ..... 516
Extraction and Characterization of a Soluble Chicken Bone Collagen (Tiffany Omokanwaye, Otto Wilson Jr., Hoda Iravani, Pramodh Kariyawasam) ..... 520
A Model for Human Postural Regulation (Yao Li, William S. Levine) ..... 524
Development of an Average Chest Shape for Objective Evaluation of the Aesthetic Outcome in the Nuss Procedure Planning Process (K.J. Rechowicz, R. Kelly, M. Goretsky, F. Frantz, S. Knisley, D. Nuss, F.D. McKenzie) ..... 528

Sickle Cell and Blood Cell

Sickle Hemoglobin Fiber Growth Rates Revealed by Optical Pattern Generation (Z. Liu, A. Aprelev, M. Zakharov, F.A. Ferrone) ..... 532
Sickle Cell Occlusion in Microchannels (A. Aprelev, W. Stephenson, H. Noh, M. Meier, M. MacDermott, N. Lerner, F.A. Ferrone) ..... 536
Engineering Microfluidics Based Technologies for Rapid Sorting of White Blood Cells (Vinay Raj, Kranthi Kumar Bhavanam, Vahidreza Parichehreh, Palaniappan Sethu) ..... 540
Peripheral Arterial Tonometry in Assessing Endothelial Dysfunction in Pediatric Sickle Cell Disease (K.M. Sivamurthy, C. Dampier, M. MacDermott, M. Meier, M. Cahill, L.L. Hsu) ..... 544
Comparison of Shear Stress, Residence Time and Lagrangian Estimates of Hemolysis in Different Ventricular Assist Devices (K.H. Fraser, M.E. Taskin, T. Zhang, B.P. Griffith, Z.J. Wu) ..... 548

Cancer

Drug Resistance Always Depends on the Turnover Rate (C. Tomasetti, D. Levy) ..... 552
Design and Ex Vivo Evaluation of a 3D High Intensity Focused Ultrasound System for Tumor Treatment with Tissue Ablation (K. Lweesy, L. Fraiwan, M. Al-Shalabi, L. Mohammad, R. Al-Oglah) ..... 556

The Dr. James W. Jacobson Symposium on Technologies for Cancer Diagnostics

Clinical Applications of Multispectral Imaging Flow Cytometry (H. Minderman, T.C. George, K.L. O'Loughlin, P.K. Wallace) ..... 560
Multispectral Imaging, Image Analysis, and Pathology (Richard M. Levenson) ..... 564
Sensitive Characterization of Circulating Tumor Cells for Improving Therapy Selection (H. Ben Hsieh, George Somlo, Robyn Bennis, Paul Frankel, Robert T. Krivacic, Sean Lau, Janey Ly, Erich Schwartz, Richard H. Bruce) ..... 568
Nanohole Array Sensor Technology: Multiplexed Label-Free Protein Binding Assays (J. Cuiffi, R. Soong, S. Manolakos, S. Mohapatra, D. Larson) ..... 572

Author Index ..... 577
Keyword Index ..... 581

Traumatic Brain Injury in Rats Caused by Blast-Induced Hyper-Acceleration

G. Fiskum1, J. Hazelton1, R. Gullapalli2, and W.L. Fourney3

1 University of Maryland School of Medicine, Dept. of Anesthesiology and the Shock, Trauma, and Anesthesiology Research Center (STAR)
2 University of Maryland School of Medicine, Dept. of Diagnostic Radiology
3 University of Maryland School of Engineering, Dept. of Mechanical Engineering and the Center for Energetic Concepts Development

Abstract— Well over 100,000 U.S. warfighters in Iraq and Afghanistan have sustained some form of traumatic brain injury. Most of these injuries have been due to exposure to blasts. Of these victims, approximately 20% have been passengers within vehicles that were targets of roadside improvised explosive devices. The hyper-acceleration experienced by these victims can result in exposure to g-forces much greater than those that cause loss of consciousness, a clinical symptom of mild traumatic brain injury. We have developed an experimental paradigm to study the effects of blast-induced hyper-acceleration on laboratory rats to gain insight into mechanisms responsible for brain injury. Our hypothesis is that g-forces in the range of 20–40 g can induce mild brain injury without causing other injuries that are lethal. The preliminary results of brain histology measurements that probe for the degeneration or structural disorganization of neurons support this hypothesis. The significance of these studies is that they could eventually lead to improved designs of military vehicles that better protect against blast-induced neurologic injury. Moreover, the use of accelerometers and other sensors in these experiments could establish thresholds of forces that cause brain injury. Finally, experimental drugs and other conditions could be tested in this paradigm to identify neuroprotective interventions that are specifically effective against blast-induced traumatic brain injury.

Keywords— Explosion, acceleration, traumatic brain injury, neuron.

I. INTRODUCTION

A form of complex traumatic brain injury (TBI) has been identified in armed forces and civilians in Iraq and Afghanistan [1,2]. Approximately 25% of all combat casualties in these military conflicts are caused by TBI, with most of these head injuries caused by explosive munitions such as bombs, land mines, improvised explosive devices, and missiles [Defense & Veterans Brain Injury Center web site, www.dvbic.org]. The majority of experimental data has focused on one aspect of these explosions, the blast overpressure [3,4]. Most of these studies used a model in which an air-driven pressure wave was delivered via a long shock tube, either directly to the immobilized animal's head or to its body. Very few causative pathologic mechanisms to explain the CNS injury in this model have been identified, and those identified have had limited description.

It has become apparent that blast overpressure is not the only factor in complex, explosive-related closed head injuries. A multitude of physical forces play a role, including blast overpressure, thermal and chemical components, shockwave, and hyper-acceleration of the brain. We hypothesize that this extreme hyper-acceleration, with subsequent rapid deceleration, could be responsible for many aspects of brain injury. This may be especially true for the large number of soldiers injured while driving light armored vehicles over improvised explosive devices, as well as for pedestrians injured in the vicinity of large explosions.

The marked effects of rapid acceleration, or g-force (Gz), on the brain have been studied in other models related to flight acceleration. These studies use centrifuge exposure (+4-14 Gz) in rats, and have shown diffuse neuronal degeneration and indicators of cell death throughout the brain [5,6]. Similar histologic changes were seen in neurons and other brain cells in the brains of rhesus monkeys exposed to graded Gz loading (+15-21 Gz) [7]. In addition to histologic cellular changes, investigators have noted significant shearing stresses on blood vessels, which could cause vessel collapse and subsequent restricted blood flow [8]. One study involving graded Gz load (+5-20 Gz) found depression of cerebral energy metabolism that correlated with increasing Gz force [9]. Importantly, acceleration exposure resulted in significant learning deficits in rats [10]. It is interesting to note that these studies of acceleration effects on the brain used Gz of a much smaller scale than soldiers experience during a war-related explosion.

The Dynamic Effects Laboratory at UMCP has used small-scale testing to evaluate the loads applied to personnel carriers when a buried explosive detonates beneath them [11,12]. Conditions in these small-scale explosions proved to be extremely reproducible, and very similar to the parameters observed in full-scale testing of explosions at the Army Research Laboratory in Aberdeen, Maryland. Adaptation and scaling of this model to allow animal injury in a similar explosive environment could provide a completely new, clinically relevant model of blast TBI that encompasses many of the physical forces, including the extreme hyper-acceleration. Ultimately, use of this model could allow rapid testing of neuroprotective strategies, with eventual confirmation in a large animal model. This could lead to the discovery of neuroprotective strategies for the many warfighters and civilians suffering mild to severe brain injury.

As a first step toward these goals, energetics experts at the University of Maryland School of Engineering, in collaboration with neuroscientists at the University of Maryland School of Medicine, have performed preliminary experiments demonstrating that blast-induced hyper-acceleration can cause mild TBI in laboratory rats at Gz between 20 and 30 that do not cause lethal injury to other organs.

travel vertically up to 15 in, guided by poles located in holes in each corner of the plate. The two cylinders secured to the top of the plate house the anesthetized rats, which are wrapped in a thick cotton “blanket” to minimize movement within the cylinders. The cylinders are capped to prevent exposure of the rats to the pressures (

Magnetoencephalography and Auditory Neural Representations

22, p < 10-4 for both hemispheres) but not the stimulus bandwidth. There is a significant interaction between the effect of AM rate and the effect of carrier bandwidth (F(12, 279) > 2.04, p < 0.03) in the right hemisphere.

Fig. 1 Dipole strengths of MEG responses averaged over subjects. Error bars are standard error over subjects.

Fig. 2 Analysis of the power of the SSR at fAM. The MTF calculated as a function of the corrected power of the SSR at fAM and stimulus fAM. Each gray hollow circle represents the corrected power of the SSR at fAM for one subject. The black line marked by triangles shows the grand averaged MTF. The gray line is the optimal linear fit of the MTF.

B. Experiment 2

An MEG response at the stimulus fAM is observed in all stimulus conditions. The MTF measured by the power of the MEG response at fAM has a low-pass pattern: the power of the MEG response to an AM sound decreases with increasing fAM of that sound. It needs to be clarified, however, whether the low-pass pattern of the MTF results from the stimulus-driven SSR or from background noise. One can estimate the power of the stimulus-driven SSR by subtracting the estimated power of background noise at fAM from the power of the measured MEG signal at fAM. The MTF measured by this corrected power of the SSR at fAM still shows a low-pass pattern and can be modeled as a linear function of fAM measured in Hz (Fig. 2). The slope of the fitted linear function is -0.96 dB/Hz (99% confidence interval, -1.17 to -0.74 dB/Hz). For fAM higher than 1 Hz, the slope of the MTF can also be fitted as -3.6 dB/oct (99% confidence interval, -4.8 to -2.5 dB/oct). Since the slope of the fitted line is significantly negative (p < 0.01), the low-pass pattern of the MTF is statistically significant for fAM lower than 15 Hz. To reduce subject-to-subject variability, the corrected power is normalized before the regression analysis. Even without any correction or normalization, the slope of the MTF is still significantly negative (99% confidence interval, -1.73 dB/Hz to -0.29 dB/Hz).

To investigate whether the reduction in the evoked power of the MEG response at fAM is due to a loss of energy in every single trial or a loss of phase locking over trials, we calculated the phase coherence value [20] of the MEG response at fAM over trials. One-way ANOVA shows that the phase coherence values do not significantly change when the stimulus fAM increases from 0.7 Hz to 13.8 Hz (F(5,54) = 0.84, p > 0.5). Hence, the low-pass pattern of the evoked power of the SSR at fAM is due to a change in single-trial power rather than a change in over-trial phase coherence. Since both the neural response power and the background noise power are strongest at low frequencies, regression analysis was used to show that the signal-to-noise ratio of the neural response at fAM does not significantly increase or decrease when fAM increases (p > 0.6). If the neural response power at fAM, 2fAM, and 3fAM are combined, the MTF has a slope of -1.06 dB/Hz (99% confidence interval, -1.30 dB/Hz to -0.82 dB/Hz).
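For concreteness, the noise correction, the linear MTF fit, and the across-trial phase coherence described above can be sketched in a few lines of Python (NumPy). This is an illustrative reconstruction, not the authors' analysis code: the trial-array layout, the neighboring-bin noise estimate, and the synthetic demo data are assumptions.

import numpy as np

def power_at(x, freq, fs):
    # Power of the DFT bin of x nearest to `freq`.
    spec = np.fft.rfft(x) / x.size
    k = int(round(freq * x.size / fs))
    return 2.0 * np.abs(spec[k]) ** 2

def corrected_ssr_power(trials, f_am, fs, offsets=(-0.4, 0.4)):
    # Evoked power at f_AM minus background noise estimated from nearby
    # frequency bins; assumes epochs long enough to resolve the offsets.
    evoked = trials.mean(axis=0)          # phase-locked average over trials
    p_sig = power_at(evoked, f_am, fs)
    p_noise = np.mean([power_at(evoked, f_am + df, fs) for df in offsets])
    return max(p_sig - p_noise, 1e-20)

def phase_coherence(trials, f_am, fs):
    # Length of the mean resultant vector of single-trial phases at f_AM
    # (1 = perfect phase locking across trials, 0 = random phase).
    k = int(round(f_am * trials.shape[1] / fs))
    phases = np.angle(np.fft.rfft(trials, axis=1)[:, k])
    return np.abs(np.mean(np.exp(1j * phases)))

# Demo on synthetic data: a 3-Hz "SSR" buried in noise, 20 trials of 10 s.
fs = 1000
t = np.arange(0, 10.0, 1.0 / fs)
rng = np.random.default_rng(0)
trials = np.sin(2 * np.pi * 3.0 * t) + rng.standard_normal((20, t.size))
p_db = 10 * np.log10(corrected_ssr_power(trials, 3.0, fs))
plv = phase_coherence(trials, 3.0, fs)
# An MTF slope in dB/Hz would then come from np.polyfit(rates, powers_db, 1)
# over the tested modulation rates (0.7-13.8 Hz in the experiment above).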

Fig. 3 Analysis of the instantaneous amplitude of the SSR at fFM. The solid black line with triangle markers is the MTF averaged over all subjects. The solid gray line is the optimal linear fit of the MTF, while the dotted gray line with white square markers is the MTF predicted by a model. The power of the instantaneous amplitude for each subject and each condition is shown as a gray hollow circle. The instantaneous amplitude's power at fAM is plotted as the dashed gray line.


One of the primary goals of this work is to examine the interaction between fast modulations and slow modulations. Since the instantaneous amplitude of the SSR at fFM oscillates with fundamental frequency fAM, it is a neural correlate of the stimulus slow AM. Consequently, the relation between the power of the instantaneous amplitude and the stimulus fAM can also be regarded as an effective MTF. We estimate the power of the instantaneous amplitude as the sum of the power at the first four harmonics of fAM. This MTF (Fig. 3) has a slope of -0.72 dB/Hz (99% confidence interval, -0.95 to -0.49 dB/Hz). When fAM is higher than 1 Hz, the slope of the MTF can also be fitted as -3.0 dB/oct (99% confidence interval, -5.0 to -1.6 dB/oct). For this MTF calculation, the power at each harmonic of fAM was corrected by subtracting the power of background noise at that frequency; the estimate of the power of the instantaneous amplitude is also normalized to reduce subject-to-subject variability. Without any correction or normalization, the MTF slope is still significantly negative (p < 0.01). As the stimulus fAM increases, the power of the MEG response at fFM decreases at 0.86 dB/oct, while the power of the instantaneous amplitude of the SSR at fFM decreases at 3.0 dB/oct. If the SSR at fFM is assumed to be sinusoidally amplitude modulated, the neural AM modulation depth of the SSR can be estimated from the ratio between the power of the instantaneous amplitude of the SSR and the power of the MEG response at fFM. Hence, with the sinusoidal AM assumption, the neural AM modulation depth should decrease at 2.1 dB/oct.
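The 2.1 dB/oct figure follows directly from the two measured slopes; a short derivation under the stated sinusoidal-AM assumption (our notation, not the paper's) makes this explicit. Writing the SSR near fFM as s(t) = A[1 + m cos(2π fAM t)] cos(2π fFM t), the envelope component at fAM has power (Am)²/2 and the carrier component has power A²/2, so

\[
  \frac{P_{\mathrm{env}}(f_{AM})}{P_{\mathrm{SSR}}(f_{FM})} = m^{2}
  \quad\Longrightarrow\quad
  20\log_{10} m = P_{\mathrm{env}}\,[\mathrm{dB}] - P_{\mathrm{SSR}}\,[\mathrm{dB}],
\]

and the modulation depth in dB therefore falls at approximately 3.0 − 0.86 ≈ 2.1 dB per octave of fAM.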

IV. CONCLUSIONS

First, this study characterizes the properties of MEG responses to AM below 30 Hz. The SSR is strongest at the lowest modulation rates and decreases 2-4 dB per octave. For jointly modulated stimuli, the instantaneous amplitude of the SSR at fFM also oscillates with fundamental frequency fAM. Due to these neural interactions, the information in slow AM is simultaneously encoded in neural oscillations at fAM and fFM.

2. van Zanten GA, Senten CJ (1983) Spectro-temporal modulation transfer function (STMTF) for various types of temporal modulation and a peak distance of 200 Hz. J Acoust Soc Am 74:52-62 3. Chi T, Gao Y, Guyton MC, Ru P, Shamma S (1999) Spectro-temporal modulation transfer functions and speech intelligibility. J Acoust Soc Am 106:2719-2732 4. Steeneken HJ, Houtgast T (1980) A physical method for measuring speech-transmission quality. J Acoust Soc Am 67:318-326 5. Drullman R, Festen JM, Plomp R (1994) Effect of temporal envelope smearing on speech reception. J Acoust Soc Am 95:1053-1064 6. Zeng FG, Nie K, Stickney GS, Kong YY, Vongphoe M, Bhargave A, Wei C, Cao K (2005) Speech recognition with amplitude and frequency modulations. Proc Natl Acad Sci U S A 102:2293-2298 7. Hamalainen M, Hari R, Ilmoniemi RJ, Knuutila J, Lounasmaa OV (1993) Magnetoencephalography - Theory, Instrumentation, and Applications to Noninvasive Studies of the Working Human Brain. Reviews of Modern Physics 65:413-497 8. Picton TW, John MS, Dimitrijevic A, Purcell D (2003) Human auditory steady-state responses. Int J Audiol 42:177-219 9. Ross B, Borgmann C, Draganova R, Roberts LE, Pantev C (2000) A high-precision magnetoencephalographic study of human auditory steady-state responses to amplitude-modulated tones. J Acoust Soc Am 108:679-691 10. Schoonhoven R, Boden CJ, Verbunt JP, de Munck JC (2003) A whole head MEG study of the amplitude-modulation-following response: phase coherence, group delay and dipole source analysis. Clin Neurophysiol 114:2096-2106 11. Galambos R, Makeig S, Talmachoff PJ (1981) A 40-Hz auditory potential recorded from the human scalp. Proc Natl Acad Sci U S A 78:2643-2647 12. Oldfield RC (1971) The assessment and analysis of handedness: The Edinburgh inventory. Neuropsychologia 9 (1), 97-113 13. de Cheveigné A, Simon JZ (2007) Denoising based on time-shift PCA. J. Neurosci. Methods 165 (2), 297-305 14. de Cheveigné A, Simon JZ (2008a) Sensor noise suppression. J. Neurosci. Methods 168 (1), 195-202 15. de Cheveigné A, Simon JZ (2008b) Denoising based on spatial filtering. J. Neurosci. Methods 171 (2), 331-339 16. Sarvas J (1987) Basic mathematical and electromagnetic concepts of the biomagnetic inverse problem. Phys. Med. Biol. 32, 11-22 17. Mosher JC, Baillet S, Leahy RM (2003) Equivalence of linear approaches in bioelectromagnetic inverse solutions. IEEE Workshop on Statistical Signal Processing, St. Louis 18. Uutela K, Hamalainen M, Salmelin R (1998) Global optimization in the localization of neuromagnetic sources. IEEE Trans. Biomed. Eng. 45 (6), 716-723 19. Liegeois-Chauvel C, Lorenzi C, Trebuchon A, Regis J, Chauvel P (2004) Temporal Envelope Processing in the Human Left and Right Auditory Cortices. Cereb Cortex 14:731-740 20. Fisher NI (1993) Statistical analysis of circular data. Cambridge [England] ; New York, NY, USA: Cambridge University Press

ACKNOWLEDGMENTS

We thank Max Ehrman and Jeff Walker for excellent technical support. This research was supported by the National Institutes of Health (NIH) grant R01DC008342.


Author: Jonathan Z. Simon
Institute: Electrical & Computer Engineering, University of Maryland
City: College Park, MD 20815
Country: USA
Email: [emailprotected]

Voice Pitch Processing with Cochlear Implants

Monita Chatterjee1, Shu-Chen Peng2, Lauren Wawroski3, and Cherish Oberzut1

1 Cochlear Implants and Psychophysics Lab, Dept. of Hearing and Speech Sciences, University of Maryland, College Park, MD, USA
2 Division of Ophthalmic and Ear, Nose and Throat Devices, Office of Device Evaluation, Center for Devices and Radiological Health, US Food and Drug Administration, Silver Spring, MD, USA
3 Children's National Medical Center, Washington, DC, USA

Abstract— Cochlear implants today allow many severe-to-profoundly hearing-impaired individuals to hear and understand speech in everyday settings. However, the transmission of voice pitch information via the device is severely limited. Beyond limiting their appreciation of music, the lack of pitch information means that cochlear implant patients have difficulty with speaker/gender recognition and with intonation and emotion perception, all of which limits speech communication in everyday life. In electrical stimulation via cochlear implants, the fine spectral detail necessary for conveying the harmonic structure of F0 is not available. Although the spectral cues for pitch are lost, the temporal periodicity cue for pitch may still be available to the listener after speech processing. Our previously published results indicate that adult cochlear implant listeners are sensitive to this periodicity cue and are able to use it in a voice-pitch-based intonation identification task. Ongoing experiments also suggest that different mechanisms may play a role in processing the temporal pitch cue when multiple channels are concurrently stimulated rather than when a single channel is stimulated. Initial experiments with primary-school-aged children who were implanted before the age of five indicate no significant differences between them and their normally hearing peers in performance on the intonation identification task. This suggests that cochlear implants can benefit at least some children with severe-to-profound hearing loss in voice-pitch processing, and points to the potential role of neural plasticity in adaptation to cochlear implants.

Keywords— cochlear implants, voice pitch, modulation, children, intonation.

I. INTRODUCTION

In normal hearing, the primary cue for the auditory perception of voice pitch, or fundamental frequency (F0), of spoken utterances is provided by the detailed harmonic structure found in the acoustic spectrum. Normal cochlear filters are able to represent the harmonics within the voice pitch range with excellent resolution. Thus, the human auditory system is able to function with remarkable precision in tasks involving speaker identification, music processing, speech intonation processing, tonal language perception, and emotion recognition, all of which play major roles in our everyday communication. It has been shown that the temporal periodicity of the signal, which also contains information about the fundamental frequency, can be utilized by the auditory system in extracting voice pitch; however, this pitch is perceptually not as salient as spectrally determined pitch.

In cochlear implants, the primary pitch information available to listeners arises from the temporal periodicity in the envelope; therefore, cochlear implant patients do not have access to the salient pitch information that is so important for speech and music perception. The spectral detail in the peripheral (auditory nerve level) representation is largely lost to broad spatial fields and large amounts of channel interaction, and there is no explicit coding of pitch in the speech processing strategies employed in present-day devices. The processor performs a frequency analysis of the signal, extracts the time-varying envelope from each channel, and stimulates the different electrodes of the implanted array with current pulse trains modulated by the extracted envelope from tonotopically appropriate frequency bands (a sketch of this chain follows below). Pitch information is present to varying degrees in the extracted envelope, depending upon the degree of low-pass filtering applied in the processing stages and the carrier rates of the modulated pulse trains. It is thus apparent that the ability to process amplitude modulations, and to discriminate between different rates of amplitude modulation, is necessary for CI listeners to process the available pitch information in the signal envelope. Here, we present our recent work investigating listeners' ability to discriminate between temporal modulation patterns in a psychophysical task, as well as measures of more real-world performance in a speech intonation identification task.
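The processing chain described above (filter bank, envelope extraction, envelope-modulated pulse trains) can be sketched as follows. This is a generic illustration, not any manufacturer's strategy; the filter order, the 300 Hz envelope cutoff and the 2000 pulses/s carrier rate are assumptions chosen for the example.

```python
import numpy as np
from scipy.signal import butter, sosfilt, sosfiltfilt

def channel_envelope(x, fs, band, env_cutoff=300.0):
    """Band-pass one analysis channel, rectify, and low-pass the envelope.
    `band` is a (low, high) pair in Hz."""
    sos_band = butter(4, band, btype="bandpass", fs=fs, output="sos")
    sos_env = butter(2, env_cutoff, btype="lowpass", fs=fs, output="sos")
    return sosfiltfilt(sos_env, np.abs(sosfilt(sos_band, x)))

def modulated_pulse_train(env, fs, carrier_rate=2000.0):
    """Amplitude-modulate a fixed-rate pulse train with the channel envelope."""
    pulses = np.zeros(len(env))
    pulses[::int(round(fs / carrier_rate))] = 1.0  # one pulse per carrier period
    return pulses * env                            # pulse amplitudes follow the envelope
```

The degree of low-pass filtering in `channel_envelope` controls how much F0 periodicity survives in the envelope, which is exactly the trade-off noted above.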

II. EXPERIMENTAL FINDINGS

A. Sensitivity to Temporal Patterns Is Correlated with Performance in F0-Based Speech Intonation

In a recent study [1] we showed that adult CI users were able to use F0 cues to determine whether an utterance was question-like or statement-like. The utterance was the word "popcorn", which had been resynthesized to have 360 different combinations of initial F0 (120 Hz and 200 Hz – 2 levels), F0 contours (rising, falling, flat – 9 levels), intensity patterns (increasing or decreasing from the first to the second syllable – 4 levels) and duration patterns (increasing or decreasing from the first to the second syllable – 5 levels). For the purposes of that particular study, the intensity and duration patterns served as random roves, and listeners' attention to F0 was the focus of the analyses. The proportion of times the listener judged each sample of the word as a question was plotted as a function of the change in F0 from beginning to end (in octaves) to obtain psychometric functions. These functions were converted into cumulative d' scores, and these scores were compared across listener groups. Results showed that CI listeners' performance was significantly poorer than NH listeners' performance in the same task. When the NH listeners were presented with spectrally degraded versions of the same stimuli, their performance declined to resemble that of the CI listeners. In a parallel psychophysical task, the CI listeners' sensitivity to amplitude modulation rate discrimination was measured on a single electrode. The reference rate was fixed at different values across the voice pitch range (from 50 to 300 Hz), and a 3-interval, forced-choice, adaptive procedure was used to obtain modulation rate discrimination limens. The CI listeners' performance on this task was found to be significantly correlated with their cumulative d' scores on the intonation recognition task. These results suggest that temporal pattern sensitivity is important for CI listeners in their everyday experience with voice-pitch-based tasks in speech communication. The psychophysical experiments described above, however, were conducted with single-channel stimuli. In the experiment described below, results obtained with multi-channel stimulation are described.

B. Sensitivity to Temporal Patterns in Multichannel Stimulation

This experiment measured amplitude modulation rate discrimination thresholds in CI listeners in the presence of competing signals on other channels. The signal was applied to one channel, with "maskers" (competing signals) presented concurrently on two flanking electrodes, located either one, two, three, or four electrodes away from the signal electrode. The experiment was conducted with users of the Freedom or N-24 CI (manufactured by Cochlear Corporation).

Methods

1. Participants: Eight adult CI users participated in these experiments. All were users of devices manufactured by Cochlear Corporation.

2. Stimuli: A custom research interface was used to deliver controlled electrical stimuli directly to specific electrodes in the patient's implanted device. Stimuli were trains of biphasic, charge-balanced current pulses, presented at a carrier pulse rate of 2000 pulses/second. The signal was always presented on electrode 18 (an apical electrode in the cochlea). All stimuli were 300 ms in length. Following the standard interleaved stimulation mode on these devices, the stimuli on the different channels/electrodes were presented concurrently but non-simultaneously with each other, with offset delays of ~0.17 ms. When modulated, pulse trains were sinusoidally amplitude modulated at specific rates. The masker channels were either steady state or amplitude modulated. All modulation depths were fixed at 20% (i.e., modulation index of 0.2).

3. Procedure: The listener's task was to detect a difference between a reference modulation rate of 100 Hz and a (higher) comparison modulation rate. Discrimination thresholds were measured using a standard 3-interval, forced-choice psychophysical procedure with a 2-down, 1-up adaptive method. In each trial, two of the three intervals (randomly chosen) contained the "reference" signal modulated at the base rate, while the third interval contained the "target" signal, which was modulated at the higher rate. The rate of the target signal was adaptively modified using a 2-down, 1-up rule, converging at the 70.7% correct point on the psychometric function (Levitt, 1971); a sketch of this rule follows below. Alongside the signal to be discriminated, the fixed maskers were concurrently present in each interval.

Results

Analysis of the results indicated that masker electrode location did not appreciably influence the results. For the sake of simplicity, therefore, the results shown here have been averaged across electrode locations. Figure 1 shows the results obtained when the maskers were coherently modulated at 100 Hz (the same reference rate as the signal), compared with results obtained in the unmasked condition.
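The 2-down, 1-up rule described in the Procedure can be sketched as follows; the step sizes, starting difference and stopping rule here are illustrative assumptions, not the study's settings.

```python
import random

def staircase(run_trial, base_rate=100.0, start_delta=40.0,
              step=0.8, n_reversals=8, max_trials=400):
    """2-down, 1-up adaptive track: converges near 70.7% correct (Levitt, 1971)."""
    delta, n_correct, direction, reversals = start_delta, 0, -1, []
    for _ in range(max_trials):
        if len(reversals) >= n_reversals:
            break
        if run_trial(base_rate, base_rate + delta):   # correct response
            n_correct += 1
            if n_correct == 2:                        # two correct: make it harder
                n_correct = 0
                if direction == +1:
                    reversals.append(delta)           # track turned downward
                direction = -1
                delta = max(delta * step, 0.1)
        else:                                         # one incorrect: make it easier
            n_correct = 0
            if direction == -1:
                reversals.append(delta)               # track turned upward
            direction = +1
            delta = delta / step
    tail = reversals[-6:]
    return sum(tail) / max(len(tail), 1)              # threshold: mean of last reversals

# Toy listener whose accuracy grows with the rate difference:
# threshold = staircase(lambda ref, tgt: random.random() < min(1.0, (tgt - ref) / 20.0))
```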



Fig. 1 Sensitivity to modulation rate (mean Weber fraction, %) in the presence of 100-Hz modulated maskers and with no masker. Error bars show ±1 s.d.


The Weber fraction (ΔF/F, where ΔF is the just-detectable increment in modulation frequency) is plotted in percentage form. The lower the Weber fraction, the more sensitive the listener (i.e., the smaller the change in modulation rate he/she can detect). It is apparent that the modulated maskers produced a lower Weber fraction than the unmasked condition. This difference was found to reach a moderate level of statistical significance in an ANOVA (F(1,13) = 5.629, p = 0.034). In a further analysis, the effects of the relative phases of the masker modulators were measured. Results showed significantly enhanced sensitivity in the conditions in which the masker modulators were in phase with each other, but out of phase with the signal. The effects of other masker modulation rates were also examined. Results showed that 100 Hz modulation rates on the maskers (i.e., the same rate as the reference signal) produced the greatest sensitivity on the signal channel. Other masker modulation rates and types (8 Hz, 24 Hz, 134 Hz, and steady-state) either had no effect on the results or produced interference. This is shown in Figure 2, which plots the mean Weber fraction (collapsed across masker electrode location) obtained with the different masker types. These results indicate that CI listeners' sensitivity to temporal patterns can be strongly influenced by the presence of competing temporal envelopes on other channels. In particular, envelopes that are modulated at rates close to the reference rate can cause enhancement. The underlying mechanisms are as yet unclear. The pulses on different channels were never simultaneous, but rather interleaved in time; thus, simultaneous interactions between the electrical stimuli (such as beats) cannot be invoked to interpret these results.

Fig. 2 Sensitivity to modulation rate (mean Weber fraction, %) in the presence of various types of maskers. Note that 100, 8, etc. denote masker modulation rates in Hz; SS denotes an unmodulated masker, and SSpeak denotes an unmodulated masker with amplitude at the peak of the corresponding modulated masker. Error bars show ±1 s.d.

C. Voice Pitch Processing by Children

Prosodic cues help children to learn spoken language [2, 3]. It is therefore of considerable interest to investigate to what extent early-implanted young children with CIs are able to perceive changes in voice pitch to detect aspects of prosody. Here, we present the results of an initial study conducted with a group of primary-school-aged children. The objective of the study was to quantify the sensitivity of normally hearing (NH) and cochlear-implanted (CI) children, 6-8 years of age (the CI children implanted before the age of five), to changes in the F0 contour.

Methods

1. Participants: Twenty normally hearing and eight CI children participated in this study. All children were between 6 and 8 years of age. All CI children had been implanted before the age of 5 years.

2. Stimuli: These were a subset of the "popcorn" database described previously, chosen to have various F0 contours. Intensity and duration, which normally co-vary with F0, remained unchanged. Sounds were presented via loudspeaker at 65 dBA in a soundproof booth.

3. Procedure: The children indicated whether each sample of the utterance "popcorn" sounded like the speaker was "asking" or "telling". For each F0 contour, a child heard eight samples of the word, four with an initial F0 of 120 Hz (male-sounding) and four with an initial F0 of 200 Hz (female-sounding).

Results

Figure 3 shows the results obtained with the two groups of children, plotted as the proportion of samples judged to be question-like against the F0 change (from the end of the sample to the beginning) in octaves. These psychometric functions were fitted with a three-parameter sigmoidal function, and the data were converted into a cumulative d' measure (a measure of sensitivity based on signal detection theory); a sketch of this analysis follows the figure caption.

Fig. 3 Results obtained with NH and CI children, plotted as the proportion of samples judged question-like against the F0 change in octaves. Error bars indicate ±1 s.e.
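The psychometric-function analysis described above can be sketched as follows (our reconstruction of the general method, with fabricated example numbers, not the study's data): a three-parameter sigmoid is fitted to the proportion of "question" judgments, and a cumulative d' index is accumulated from z-transformed proportions across successive F0-change steps.

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import norm

def sigmoid(x, a, x0, k):
    """Three-parameter logistic: upper asymptote a, midpoint x0, slope k."""
    return a / (1.0 + np.exp(-k * (x - x0)))

def cumulative_dprime(p, floor=0.01):
    """Accumulate |z(p[i+1]) - z(p[i])| across stimulus steps."""
    p = np.clip(p, floor, 1.0 - floor)     # avoid infinite z-scores at 0 or 1
    return float(np.sum(np.abs(np.diff(norm.ppf(p)))))

f0_change = np.array([-1.5, -1.0, -0.5, 0.0, 0.5, 1.0, 1.5])       # octaves
p_question = np.array([0.05, 0.10, 0.20, 0.50, 0.80, 0.90, 0.95])  # illustrative
params, _ = curve_fit(sigmoid, f0_change, p_question, p0=[1.0, 0.0, 2.0])
print(params, cumulative_dprime(sigmoid(f0_change, *params)))
```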


A two-way, mixed ANOVA conducted on the data showed a significant effect of F0 change [F(4.7, 122.188) = 26.418, p < …].

[…] > 5 Hz rTMS has excitatory effects. These effects last on the order of 15 minutes, with generally somewhat shorter durations for the excitatory effects. The safety of different stimulation rates and duty cycles has been established [6, 7], the principal side effect being the induction of seizures. There is no evidence of any kindling-like phenomenon, so even if seizures occur, epilepsy does not result. Because of the desire to minimize the amount of stimulation, and therefore reduce the risk of seizures, intermittent burst protocols have been developed. The most popular of these is theta-burst stimulation (TBS), which delivers stimuli at 50 Hz, but only three at a time, with these three-stimulus bursts repeated at 5 Hz [8] (see the timing sketch below). Such protocols can produce inhibition if performed continuously for on the order of 20-40 seconds, and facilitation if performed intermittently [9]. However, the main benefit of TBS is the reduction in the number of stimuli delivered rather than a comparative benefit in effect size [10].

C. Clinical Uses of rTMS

While many uses of rTMS have been considered, the most successful use is in depression, where stimulators are used to increase activity in the dorsolateral prefrontal cortex [11]. This treatment is now approved for drug-resistant depression in the U.S. and Canada.
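The TBS timing just described can be sketched as follows; the intermittent on/off windows (2 s on, 8 s off) are illustrative assumptions rather than a specific published protocol.

```python
def theta_burst_times(duration_s, intermittent=False, on_s=2.0, off_s=8.0):
    """Pulse onset times (s) for TBS: bursts of three pulses at 50 Hz
    (20 ms apart), with bursts repeated at 5 Hz (every 200 ms)."""
    times, t = [], 0.0
    while t < duration_s:
        in_on_window = (not intermittent) or (t % (on_s + off_s)) < on_s
        if in_on_window:
            times.extend(t + k * 0.020 for k in range(3))  # one 3-pulse burst
        t += 0.200                                         # next burst at 5 Hz
    return times

# Continuous TBS for 40 s yields 200 bursts (600 stimuli), the inhibitory
# regime noted above; intermittent delivery yields facilitation.
# print(len(theta_burst_times(40.0)))
```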

III. RTMS AS TREATMENT FOR TINNITUS

A. Clinical Trials

Because rTMS can be used to suppress cortical activity, it was a reasonable choice of treatment for tinnitus, in which there is constant activity in the cortex related to the conscious perception of a ringing sound in the absence of any external sound. Table 1 lists some of the studies and reviews related to such treatment. As can be seen, there has been a tremendous explosion of publishing related to rTMS in tinnitus, starting in 2003, with the work mainly carried out in German, Belgian and British laboratories. This bias may be related to the more common clinical use of TMS in those countries. The first treatments for tinnitus involved rTMS to the temporal cortex, with stimulation frequencies ranging from 1 Hz [12] to 10 Hz [13]. The treatment targets have varied, but often include the left temporal cortex. Recent studies have shown a surprising durability of several months for a treatment that takes only 10 days [14]. While treatment can be guided by neuroimaging that demonstrates areas of overactivity to be suppressed [12, 15], treatment based on targeting the most frequent locus of overactivity, the left auditory cortex, is also effective [14]. Moreover, neuroimaging has demonstrated the expected suppression of the abnormal activity after rTMS treatment [16]. Perhaps surprisingly, multiple stimulation protocols appear to be effective, including 1-25 Hz stimulation and theta-burst stimulation, delivered at a variety of stimulation strengths. Because there is no noticeable effect of stimulation over the auditory cortex, stimulation strength is generally referenced to the threshold for stimulation of muscles when the coil is over the motor cortex.

Table 1 Selected Studies of TMS in Tinnitus Treatment

Year  First Author   Type
2003  Eichhammer P   Case Series
2003  Langguth B     Single subject
2003  Plewnia C      Clinical Trial
2005  Kleinjung T    Clinical Trial – Long-term outcome
2006  Folmer RL      Clinical Trial
2006  Langguth B     Mechanistic Study – PET
2006  Londero A      Review
2006  Richter GT     Single subject
2006  Fregni F       RTMS vs tDCS
2006  Langguth B     Methodology
2006  Londero A      Pilot Clinical Trial (French)
2007  Eichhammer P   Mechanistic Study in Normals
2007  Kleinjung T    Review
2007  De Ridder D    Clinical Trial – burst TMS regimens
2007  Kleinjung T    Clinical Trial – predictors of response
2007  Langguth B     Mechanistic Study
2007  Smith JA       Pilot Clinical Trial
2007  Rossi S        Controlled Clinical Trial
2007  Plewnia C      Clinical Trial
2007  Plewnia C      Clinical Trial – Dose-finding with PET guidance
2008  Kleinjung T    Review
2008  Mennemeier M   Single subject – maintenance therapy
2008  Landgrebe M    Clinical Trial
2008  Kleinjung T    Pilot Clinical Trial – 2 rTMS locations
2008  Lee SL         Pilot Clinical Trial
2008  Khedr EM       Clinical Trial – dose finding
2008  Langguth B     Clinical Trial – Priming
2009  Zazzio M       Controlled Clinical Trial + other treatments
2009  Arfeller C     Controlled Clinical Trial – theta burst rTMS
2009  Meeus OM       Review
2009  Marcondes RA   Controlled Clinical Trial
2009  Mobascher A    Review (German)
2009  Poreisz C      Clinical Trial
2009  Khedr EM       Clinical Trial
2009  Kleinjung T    Clinical Trial + L-dopa
2010  Frank G        Retrospective

B. Practical Issues

The TMS coil makes a loud click when pulses of current are passed, and the target of TMS in the treatment of tinnitus is the temporal cortex, which is close to the ear. Control for the effects of auditory stimulation is therefore critical. This has been accomplished by use of a sham coil, but the particular electrical and material techniques used to create such a coil vary, and subjects can often detect the sham quality of the stimulation, particularly if they have experienced real TMS. The other practical issue is whether rTMS treatments will need to be repeated indefinitely; the success of rTMS suggests that a more permanent solution for stimulation of the affected area will also be successful [17]. Another option is transcranial direct current stimulation, which is less technically demanding to apply, causes no auditory input, and appears to be equally effective [18].

IV. CONCLUSIONS

rTMS appears to be a promising treatment for tinnitus, and an example of how knowledge of central nervous system activity can be used to design an intervention to treat a disease state.

ACKNOWLEDGMENT

Dr. Wittenberg is supported by the Department of Veterans Affairs (Geriatrics Research, Education, and Clinical Center & Rehabilitation Research and Development Program) and Kernan Orthopaedic and Rehabilitation Hospital, Baltimore MD.

REFERENCES

1. Turrigiano GG, Nelson SB (2000) Hebb and homeostasis in neuronal plasticity. Curr Opin Neurobiol 10: 358-364
2. D'Arsonval MA (1896) Dispositifs pour la mesure des courants alternatifs de toutes fréquences. Comptes Rendus de la Société de Biologie (Paris) 2: 450-451
3. Barker AT, Jalinous R, Freeston IL (1985) Non-invasive magnetic stimulation of human motor cortex. Lancet 1: 1106-1107
4. Wolters A, Schmidt A, Schramm A, Zeller D, Naumann M, Kunesch E, Benecke R, Reiners K, Classen J (2005) Timing-dependent plasticity in human primary somatosensory cortex. J Physiol 565: 1039-1052. 10.1113/jphysiol.2005.084954
5. Chen R, Classen J, Gerloff C, Celnik P, Wassermann EM, Hallett M, Cohen LG (1997) Depression of motor cortex excitability by low-frequency transcranial magnetic stimulation. Neurology 48: 1398-1403
6. Pascual-Leone A, Houser CM, Reese K, Shotland LI, Grafman J, Sato S, Valls-Sole J, Brasil-Neto JP, Wassermann EM, Cohen LG, et al. (1993) Safety of rapid-rate transcranial magnetic stimulation in normal volunteers. Electroencephalogr Clin Neurophysiol 89: 120-130
7. Chen R, Gerloff C, Classen J, Wassermann EM, Hallett M, Cohen LG (1997) Safety of different inter-train intervals for repetitive transcranial magnetic stimulation and recommendations for safe ranges of stimulation parameters. Electroencephalogr Clin Neurophysiol 105: 415-421


8. Huang YZ, Rothwell JC (2004) The effect of short-duration bursts of high-frequency, low-intensity transcranial magnetic stimulation on the human motor cortex. Clin Neurophysiol 115: 1069-1075. 10.1016/j.clinph.2003.12.026
9. Talelli P, Greenwood RJ, Rothwell JC (2007) Exploring Theta Burst Stimulation as an intervention to improve motor recovery in chronic stroke. Clin Neurophysiol 118: 333-342. 10.1016/j.clinph.2006.10.014
10. Zafar N, Paulus W, Sommer M (2008) Comparative assessment of best conventional with best theta burst repetitive transcranial magnetic stimulation protocols on human motor cortex excitability. Clin Neurophysiol 119: 1393-1399. 10.1016/j.clinph.2008.02.006
11. Pascual-Leone A, Rubio B, Pallardo F, Catala MD (1996) Rapid-rate transcranial magnetic stimulation of left dorsolateral prefrontal cortex in drug-resistant depression. Lancet 348: 233-237
12. Langguth B, Eichhammer P, Wiegand R, Marienhegen J, Maenner P, Jacob P, Hajak G (2003) Neuronavigated rTMS in a patient with chronic tinnitus. Effects of 4 weeks treatment. Neuroreport 14: 977-980. 10.1097/01.wnr.0000068897.39523.41
13. Plewnia C, Bartels M, Gerloff C (2003) Transient suppression of tinnitus by transcranial magnetic stimulation. Ann Neurol 53: 263-266. 10.1002/ana.10468
14. Khedr EM, Rothwell JC, El-Atar A (2009) One-year follow up of patients with chronic tinnitus treated with left temporoparietal rTMS. Eur J Neurol 16: 404-408. 10.1111/j.1468-1331.2008.02522.x
15. Kleinjung T, Eichhammer P, Langguth B, Jacob P, Marienhagen J, Hajak G, Wolf SR, Strutz J (2005) Long-term effects of repetitive transcranial magnetic stimulation (rTMS) in patients with chronic tinnitus. Otolaryngol Head Neck Surg 132: 566-569. 10.1016/j.otohns.2004.09.134
16. Smith JA, Mennemeier M, Bartel T, Chelette KC, Kimbrell T, Triggs W, Dornhoffer JL (2007) Repetitive transcranial magnetic stimulation for tinnitus: a pilot study. Laryngoscope 117: 529-534. 10.1097/MLG.0b013e31802f4154
17. De Ridder D, De Mulder G, Walsh V, Muggleton N, Sunaert S, Moller A (2004) Magnetic and electrical stimulation of the auditory cortex for intractable tinnitus. Case report. J Neurosurg 100: 560-564. 10.3171/jns.2004.100.3.0560
18. Fregni F, Marcondes R, Boggio PS, Marcolin MA, Rigonatti SP, Sanchez TG, Nitsche MA, Pascual-Leone A (2006) Transient tinnitus suppression induced by repetitive transcranial magnetic stimulation and transcranial direct current stimulation. Eur J Neurol 13: 996-1001. 10.1111/j.1468-1331.2006.01414.x

Author: George F. Wittenberg
Institute: VAMCHS/GRECC
Street: 10 N Greene St. (BT/18/GR)
City: Baltimore
Country: USA
Email: [emailprotected]

A Course Guideline for Biomedical Engineering Modeling and Design for Freshmen

W.C. Wong and E.B. Haase

Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD

Abstract— Johns Hopkins University's Biomedical Engineering (BME) Department freshmen Modeling and Design course provides a taste of BME by integrating first-order modeling of physiological systems with quantitative experimentation at a level that freshmen can understand. It is a team-based course from both the instructor and student perspectives, combining lectures with practical and project components. The freshmen teams consist of 5-6 students, each team mentored by a faculty adviser and guided by a graduate student teaching assistant and upperclassmen BME lab managers. Projects are completed and graded as a group. To encourage teamwork and participation, a peer evaluation system is employed, in which a student receives a modified grade based on the group's grade and their personal contribution to the project. For a cohort of about 130 freshmen, typically more than 20 faculty members, 12 graduate student teaching assistants and 14-18 BME upperclassmen lab managers are involved, which is a unique aspect of this course. By putting freshmen into close contact with many members of the faculty and student body, this course aims to serve as a springboard for students to explore the diverse BME landscape, as well as to foster a greater awareness of the opportunities of student life on campus.

Keywords— biomedical, engineering, education, freshmen, modeling.

I. INTRODUCTION Freshmen college students face a range of decisions, such as which academic discipline to pursue, which laboratory to work in, which social group to associate with and which extracurricular activities to pursue. Each of these decisions may have a profound impact on their future. Freshmen BME majors at Johns Hopkins not only need to decide whether BME is a suitable discipline for them, they also need to choose a focus area such as cell and tissue engineering, systems biology, biomechanics, biomedical sensors and devices, computational modeling and bioinformatics. This mandatory 2-credit freshmen course in modeling and design helps our students explore different fields through a diverse range of lecture topics and projects. Freshmen are in direct contact with faculty members, graduate students and undergraduates working in laboratories throughout the University. The team-based format builds a social network which helps support them through the rest of their college career, regardless of the decisions they make.

The primary goal of this course is to engage our students as active BMEs from their very first day at Johns Hopkins. Emphasis is placed on developing the critical thinking, problem solving, interpersonal and leadership skills that are relevant across a wide range of disciplines, rather than on teaching subject-specific knowledge that students will acquire in the subsequent years of their college education. The difficulty in challenge-based teaching is that the students have not learned the skills or information they need before they start their projects, a situation encountered in other freshmen biomedical engineering courses [1]. This course has been designed to provide students with enough guidance to successfully transition from high school to college, while also fulfilling a number of ABET criteria: (a), (b), (d), (e) and (g) [2]. In addition, some of the independent projects require equipment design (c) or consideration of ethical concerns (f).

II. ORGANIZATION

The typical freshmen class for the Johns Hopkins BME department is approximately 130 students per year. Every freshman is required to attend a one-hour lecture once a week. In addition, freshmen are organized into teams of 5-6 students, resulting in about 25 teams in total. Each team is assigned a faculty member and a graduate student Teaching Assistant (TA). Each team undertakes 5 laboratory modules, in which they design their own experimental protocols, perform experiments in lab and write reports collectively as a team. Every year 14-20 upperclassmen laboratory managers provide on-the-spot guidance to students and ensure safety in the laboratory. The lab managers also serve as mentors to the freshmen, giving advice on course selection and extracurricular activities, which has been shown to benefit both the upperclassmen and the freshmen [3]. In total, over 60 people are involved in teaching this freshmen course.

Teams meet with their faculty advisers informally on a biweekly basis. The role of the faculty adviser is to introduce students to an aspect of their scientific research and life at Hopkins, as well as to help students prepare for each laboratory module by going through relevant concepts and ideas. Each faculty adviser is free to structure the meetings and discussions as they prefer. Students are given the freedom to design and implement their own protocols. The role of the TA is to ensure that the students are fully prepared for each laboratory module, to ensure that protocols are safe and scientific, and to grade the laboratory reports. The course director provides each faculty member with a handbook describing the course projects in detail. The handbook also suggests questions faculty may ask to encourage discussion of a specific project. In addition, the course director meets personally with all of the TAs and upperclassmen laboratory managers prior to each of the five projects. During these meetings, the course director goes through the theory and procedures behind each laboratory exercise. These meetings help ensure a relatively uniform learning experience and uniform grading criteria for all students.

III. LECTURES

There are usually 14 one-hour lectures during the semester. These lectures are not directly related to the laboratory modules, but cover various pertinent topics such as laboratory safety, engineering design, and library resources. Since lectures are scheduled according to the availability of the presenters, they vary slightly from year to year. Table 1 is a compilation of topics covered over the years.

Table 1 Bi-weekly lecture topics

1. Laboratory Safety
2. Engineering Design
3. Introduction to the Library
4. Six Flags Trip: Measuring HR and acceleration
5. Introduction to Physiology
6. Sensors
7. Department Chair Presentation
8. You and the IRB
9. Introduction to Statistics
10. Effective Oral and Poster Presentations
11. Undergraduate Research Day
12. Patents, Licensing and Technology
13. Matlab in a Nutshell
14. Design Team Presentations

IV. LABORATORY MODULES

Each laboratory module has a different theme: three experiments aimed at modeling a certain aspect of the human body (human efficiency, the static and dynamic arm, and the cardiovascular system), an engineering exercise using foam core material, and an independent project. Experiments are typically presented to students in an open-ended manner, with an accompanying list of essential facts and equations provided for guidance. Students are expected, under the guidance of their TA and Faculty Advisor, to design their own experimental protocol, which will be reviewed by TAs and approved by Lab Managers before the start of each experiment.

A. Model of Human Efficiency

The first project models human efficiency using the simple equation:

Efficiency = output / input    (1)

Students develop this definition further for humans by realizing that "output" can be measured through work or exercise. Initially the students tend to guess that "input" is food intake. Further questioning helps them to understand that oxygen consumption is a much more accurate measurement of the energy used by the body during a specific period of time. Students use their model to predict how much oxygen they would need to do a precise amount of work. The students design laboratory experiments to determine human efficiency at rest and during exercise by calculating the work done in a repetitive exercise (output) and measuring oxygen use, and consequently energy consumption (input). Oxygen consumption is computed from measurements of the subject's tidal volume obtained using the Biopac data acquisition system. Students are introduced to the noisiness of biological measurements, and to the necessity of making certain assumptions in the acquisition of data, e.g. negligible anaerobic respiration under controlled exercise conditions. They also learn to make quantitative comparisons to investigate possible differences in efficiency due to gender and conditioning (between an athlete and a non-athlete).

B. Model of Static and Dynamic Arm

The second project estimates the force required in an arm muscle using both static and dynamic models. Using a static model, such as the one sketched in Fig. 1, students learn to solve for the force in the deltoid muscle. The maximum possible deltoid muscle force is determined by multiplying the cross-sectional area of the muscle by an average maximum muscle stress of approximately 30 N/cm2. Through a combination of estimates and measurements, the students may solve for the force in the deltoid muscle that is required to hold a specific load, depicted as Fload in Figure 1 (a sketch of this moment balance follows below). Changing the value of the estimated parameters in the models allows the students to determine which variables have the greatest effect on muscle force. The students are usually surprised to discover that the arm length and weight have little effect on the force required of the deltoid muscle. The point and angle of attachment of the deltoid muscle, which vary between males and females, are the most important variables in calculating deltoid force.
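A minimal sketch of the static-arm moment balance follows. The geometry (arm length, insertion distance and angle) and the weights are illustrative values of the kind students estimate or measure, not values from the course handbook.

```python
import math

def deltoid_force(load_n, arm_weight_n, arm_len_m=0.70,
                  insertion_m=0.15, insertion_deg=15.0):
    """Moment balance about the shoulder for a horizontally extended arm:
    F_delt * d_insert * sin(angle) = F_arm * (L/2) + F_load * L."""
    load_moment = load_n * arm_len_m             # load held in the hand
    arm_moment = arm_weight_n * arm_len_m / 2.0  # arm weight acts at the midpoint
    return (load_moment + arm_moment) / (
        insertion_m * math.sin(math.radians(insertion_deg)))

# A 20 N load with a 35 N arm requires roughly 680 N of deltoid force,
# illustrating why the insertion point and angle dominate the result.
print(deltoid_force(load_n=20.0, arm_weight_n=35.0))
```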


Fig. 1 Free-body diagram of the static arm model. Fload = weight held, Farm = weight of the arm, Fdelt = force of the deltoid muscle, Fshoulder = forces in the shoulder. The arm is modeled as a cylinder of constant diameter.

Fig. 2 Arm model triceps force-length relationship. The force-length relationship has been linearized. Muscle force is calculated as the maximum possible muscle force multiplied by a force-length factor between 0 and 1. A force-length factor of 1 indicates that the muscle is close to its resting length lo. A factor close to 0 indicates that the muscle is already very contracted or stretched, and cannot generate its maximum force.

In the dynamic arm model, the forearm is propelled by the contraction of the triceps muscle. The contraction force of the triceps is modeled as a function of muscle length and contraction velocity. The differential equation of this system is solved numerically using an Excel spreadsheet, so that students are not required to possess any programming background. The estimated force in the triceps is calculated as the maximum force multiplied by two factors valued between 0 and 1.0: a force-length factor and a force-velocity factor. Figure 2 illustrates the force-length relationship used in the dynamic arm model. The force-velocity relationship is also linearized in this model. The students can change, or even remove, these muscle relationships in their own models.
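The spreadsheet integration described above amounts to a simple Euler update, sketched below in Python. The maximum force, inertia, moment arm and the linearized force-length and force-velocity factors are all illustrative assumptions.

```python
def simulate_forearm(f_max=1500.0, inertia=0.06, moment_arm=0.025,
                     l0=0.20, v_max=1.0, dt=1e-3, t_end=0.3):
    """Euler integration of the dynamic arm model: triceps force equals the
    maximum force times a force-length and a force-velocity factor."""
    theta, omega = 0.0, 0.0                   # elbow angle (rad) and velocity
    for _ in range(int(t_end / dt)):
        muscle_len = l0 - moment_arm * theta  # muscle shortens as the arm extends
        fl = max(0.0, min(1.0, 1.0 - abs(muscle_len - l0) / (0.5 * l0)))
        v_short = moment_arm * omega          # shortening velocity (m/s)
        fv = max(0.0, min(1.0, 1.0 - v_short / v_max))
        force = f_max * fl * fv               # both factors lie between 0 and 1
        omega += (force * moment_arm / inertia) * dt  # angular acceleration step
        theta += omega * dt
    return theta, omega
```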

C. Models of the Cardiovascular System

The third project studies the cardiovascular system through two different approaches: a hydraulic model using tubes and pumps, and an electric model using breadboards, DC current supplies and ohmic resistors. In the constant-pressure hydraulic model, students are introduced to the factors that affect the flow of water through a rigid pipe, such as length and diameter, to derive the flow equation, essentially Poiseuille's Law. Students learn to draw analogies between a hydraulic system and the human circulatory system to estimate the effect of a change in diameter of a blood vessel on flow. In the electric model, basic concepts of flow and resistance in series and parallel circuits are introduced. The students use this model to estimate the change in resistance of the body before and during exercise by measuring the change in mean arterial pressure and heart rate.

The most enjoyable aspect of studying the cardiovascular system is a field trip to Six Flags Amusement Park. Previous BME Design Teams developed the SHARD (Synchronous Heart rate and Acceleration Recording Device) to simultaneously measure heart rate, using a Polar heart rate monitor, and ride acceleration, using perpendicular accelerometers. Students plan experiments on three rides to determine the correlation between heart rate and acceleration through activation of the baroreceptor reflex. Data are analyzed using a Matlab program. Not only is this trip a great team-building exercise, it also provides an opportunity for the students to obtain data outside a traditional laboratory setting.

D. Foam Core Project

One of our students' favorite modules is the foam core project, which is held as a competition between all the student teams. Students are required to design two simple machines which transport a ping pong ball across a distance of 3 meters and back. To make the design exercise more challenging, one of the machines must travel with the ball. The teams are given about a week to come up with the design, and 6 hours to construct the devices using only simple materials such as foam core boards, elastic bands and wooden sticks. Students are also required to give a short presentation on their designs and to theoretically estimate the amount of time their machines would take to move the ball. The resulting devices are graded based on the total transit time, the originality of the design and the degree of automation of the machine.

E. Independent Project

The final laboratory module is the independent project. Students have 3 weeks to propose a model of a physiological system and perform a scientific experiment to test their model. The teams present their results at a poster session at the end of the semester, judged by various faculty members, TAs and the lab managers. At this point in the semester, the freshmen have learned to appreciate the open-endedness of these engineering problems and enjoy having the freedom to design their own project. Some notable past-year projects include:

1. Skin surface area in contact with a local cold stimulus on one hand affects the intensity of the thermoregulatory response in the contralateral hand. This project demonstrated that a cold stimulus to one hand led to vasoconstriction in the contralateral hand. The students concluded that the neural circuits that regulate homeostasis in response to changes in temperature are bilateral.

2. Where is that racket coming from? Effect of binaural cues on sound source localization. This project modeled blind-folded subjects' ability to localize sound at different angles with both ears, left plugged, and right plugged. The students concluded that sound localization decreased with either ear plugged, and that the locations of sounds from behind the subject were inaccurately predicted whether an ear was plugged or not.

V. GRADING AND ASSESSMENT

Since all laboratory reports are submitted as team work, each team is assigned a team grade. However, to encourage individual participation and a fair distribution of the workload within each group, there is a peer evaluation system in place such that team members grade one another and themselves based on their commitment and contribution to the project. A student who contributes more than their peers to the project receives a grade above the team grade, and vice versa. If all students contributed equally to the project, which is often the case, then each student receives the team grade. Upon completion of the course, students complete an online survey to provide feedback regarding many different aspects of the course. The comments in Table 2 are in response to the value of the independent project.

VI. CONCLUSIONS

The range of topics covered during the lectures and five modules provides the freshmen with the requisite information needed to feel confident about their career choice in BME. The team-based format and multi-level teaching style allow the students to develop relationships with peers, faculty, and mentors. The variety of required outcomes (lab reports, oral presentations, projects, and posters) gives the students experience presenting their work in many different formats. Through this course, freshmen are exposed to the problem-solving and team-building skills that are crucial to a career in BME.

Table 2 Anonymous survey results regarding what the students learned from their independent projects. First 12 of the 98 responses

ID  Response
1   A basic understanding of how the body can be recreated using free diagrams and simple models
2   I think i really learned the concept of modeling. I'm glad we had this project and feel good finishing it.
3   I feel I had learned many things about how biomedical engineers model something in simple way.
4   I learned from each project about different topics which gave me a more generalized knowledge of the biomedical field.
6   I feel I have a bit more of an idea of what is involved in biomedical engineering, at least the design aspect of it.
7   I was exposed to the various fields of BME, and I was introduced to basic modeling. With the independent project, it was really great to be able to design our own experiment, and I learned how to put together a research poster, which I've never done before. The judges were also really helpful in telling us what to do next time
8   I feel that I learned a great deal about the importance of modeling in the engineering process.
9   Teamwork, designing/building skills
10  more insight on what biomedical engineering field is like
11  A greater understanding of how engineers work together.
12  I felt that one of the most important things I got out of this was how to work with a group on a collegiate level.

ACKNOWLEDGMENTS

We would like to thank freshmen teams 12 and 16 for the projects described in this paper, specifically R. Chang, J. Fang, P. He, B. Ha, R. Romano, G. Wang, P. Adstamongkonkul, D. Dorfman, A. Harwell, C. Kemper, L. Wu and W. Zhong, and team 17 (B. Chapman, J. Jung, E. Kim, A. Mateen, K. Takach) for the sketch of their foam core project. We would also like to acknowledge the work of Dr. Robert Susil on the dynamic arm model.

REFERENCES

1. De Jongh Curry AL, Eckstein EC (2005) Gait-model for freshmen level introductory course in biomedical engineering. Proc. of the 2005 Am. Soc. for Engineering Education Annual Conference and Exposition
2. Accreditation Board for Engineering and Technology (ABET) 2008. Criteria for accrediting engineering programs effective for the evaluations during the 2009-2010 accreditation cycle, Baltimore
3. Patel KV, DeMarco R, Foulds R (2002) Integrating biomedical engineering design into the freshmen curriculum. IEEE Xplore pp 143-144


Author References and Contacts

Wing Chung Wong
1 E University Pkwy Apt 603, Baltimore, MD 21218
[emailprotected]

Eileen Haase
Dept of BME - Clark Hall 318, Johns Hopkins University, 3400 N Charles Street, Baltimore, MD 21218
[emailprotected]


Classroom Nuclear Magnetic Resonance System

C.L. Zimmerman1, E.S. Boyden2, and S.C. Wasserman2

1 Department of Electrical Engineering and Computer Science, Massachusetts Institute of Technology
2 Department of Biological Engineering, Massachusetts Institute of Technology

Abstract— A low-field classroom NMR system was developed that will enable hands-on learning of NMR and MRI concepts in a Biological Engineering laboratory course. A permanent magnet system was built to produce a static field of B0 = 0.133 tesla. A single coil is used in a resonant probe circuit both for transmitting the excitation pulses and for detecting the NMR signal. An FPGA is used to produce the excitation pulses and process the received NMR signals. With this system, nuclear magnetic resonance can be observed, and the relaxation time constants of glycerin samples can easily be measured. Future work will allow further MRI exploration by incorporating gradient magnetic field coils.

I. INTRODUCTION

The primary motivation for this research is the need for students to understand the principles of NMR, which is the basis for its medical imaging application, MRI. As a relatively new imaging technique, MRI is the subject of an abundance of active research, which continues to contribute to its already broad range of functionality. Because of the ubiquity of NMR as the basic principle behind MRI and other technologies, it is an important topic of learning for science and engineering students, particularly in bioengineering. The NMR system presented here achieves pulsed NMR capabilities and time-domain observation. The system will be used in an MIT laboratory course, Biological Instrumentation and Measurement. Further work may extend the system presented here to MRI.

A. Past Work

Based on the scientific importance of NMR, it is no surprise that there are many sources of work relevant to this project. For example, [1] describes a small desktop MR system developed using permanent magnets and inexpensive RF integrated circuits at the Magnetic Resonance Systems Lab at Texas A&M. The C-shaped magnetic setup had a static field of 0.21 T and an imaging region of 2 cm. The most relevant past work for this project is an NMR system developed at MIT for an undergraduate physics lab [2], which allows students to do pulsed NMR experiments and has a static field strength of 0.17 T.

A laboratory module similar to the system in the MIT undergraduate physics lab was developed at Northwestern University [3]; that system additionally incorporates gradient coils to allow a demonstration of spatial encoding. This work inspired some of the magnetic system development in our project.

B. Background

Atomic nuclei that have a "spin" have an intrinsic magnetic moment. NMR technology is based on the relationship between the magnetic moments of atomic nuclei and external magnetic fields, and on the ability to observe that interaction. NMR experiments can be thought of as having two stages: excitation and acquisition. The critical components of NMR are: a large static homogeneous magnetic field (B0), an oscillating excitation field (B1) which is perpendicular to B0, and a coil to measure the precession of the spins (this may be the same coil that was used to generate B1). In the presence of B0, the nuclear spins align with the field (which is conventionally taken to be in the z direction). During excitation, the nuclear spins are perturbed from alignment. This is done by applying B1, magnetic field pulses at the Larmor frequency. B1 rotates the sample's magnetization vector, M, creating a "transverse" magnetization component. The amount by which M is rotated is referred to as the tip angle, θ, and depends on the duration and amplitude of B1. When the spins are perturbed from alignment, they exhibit precessional motion. The frequency of precession is referred to as the Larmor frequency, and is linearly dependent on the field strength of B0. The second stage of pulsed NMR involves observing the precession of the spins. The orientation of M can be measured through the interaction of the magnetization with a receive coil: a changing magnetic field in a coil (produced by the precessing magnetic moments) induces an electromotive force. This corresponds to a voltage that may be observed.

II. SYSTEM DESIGN

A. System Overview

Figure 1 shows a block diagram of the overall system design. An FPGA development kit (Altera Cyclone III) with A/D and D/A converters is used to create pulses and process the received signal. B0 was created using a permanent magnet circuit (see Section III). The FPGA is used to create RF pulses, which are then amplified by a power amplifier (Minicircuits ZHL-3A). The output of the power amplifier is connected to the probe circuit, where these pulses generate B1, the excitation field. After excitation, the voltage generated across the receive coil is amplified by low-noise pre-amps (Minicircuits ZFL500LN). The amplified signal is then sent to the FPGA, where it is down-modulated and low-pass filtered. While the original NMR signal is in the MHz range, mixing allows us to shift the frequency down and eases low-pass filtering of the signal. The down-modulated NMR signal can then clearly be observed on a computer or oscilloscope.

Fig. 1 Block diagram of the system design

B. Transmit Chain

Verilog code was developed to produce the RF pulses with the FPGA; the Verilog modules are shown in Fig. 2. The "ConfigurationRegisters" module is used to set the pulse parameters, such as pulse width, spacing, and frequency. While the program is running, the user can use buttons and switches on the development kit to adjust these parameters.

Fig. 2 Block diagram of the transmit chain (produces B1)

C. Receive Chain

Figure 3 shows a block diagram of the overall receive chain and the implemented Verilog modules. After being amplified by cascaded pre-amps, the received NMR signal is input to an A/D converter on the development kit. The signal is down-modulated (by the frequency mixer) and filtered by the FPGA before being observed (a sketch of this step follows below).

Fig. 3 Block diagram of the receive chain (processes the NMR signal)
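The down-modulation step can be sketched numerically as follows (a generic digital down-conversion, not the authors' Verilog): the digitized signal is multiplied by a local-oscillator cosine and low-pass filtered so that only the difference frequency remains. The sample rate, oscillator frequency and filter cutoff are assumptions for the example.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

fs = 50e6                          # assumed A/D sample rate, Hz
f_lo = 5.60e6                      # local oscillator near f0 = 5.668 MHz
t = np.arange(0, 2e-3, 1 / fs)
fid = np.exp(-t / 10e-3) * np.cos(2 * np.pi * 5.668e6 * t)   # synthetic FID

mixed = fid * np.cos(2 * np.pi * f_lo * t)        # frequency mixing
sos = butter(4, 200e3, btype="lowpass", fs=fs, output="sos")
baseband = sosfiltfilt(sos, mixed)                # keeps |f0 - f_lo| = 68 kHz
```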

D. Isolation

The system's probe circuit serves two purposes: it transduces the power amplifier signal into the magnetic excitation field, and it transduces the NMR magnetic field into the received electrical signal. A single solenoid in the probe circuit is used for both purposes; thus the system must be designed carefully to isolate the transmit and receive signal chains. Isolation prevents the transmitted pulses from damaging the pre-amps, and helps to reduce noise during the NMR signal observation [4]. Crossed diode pairs were used for isolation (shown in Fig. 4). There is one set of diodes in series after the power amplifier and one set of shunt diodes before the pre-amps. The received NMR signal will be less than 0.6 volts in amplitude (the conducting voltage for the diodes), while the amplitude of the RF pulses is 10-20 volts. Therefore, when the pulses are being transmitted, all of the diodes conduct: the series diodes connected to the power amplifier conduct, and the shunt diodes connect the pre-amp input to ground. As a result, the large RF pulses can generate the excitation field without damaging the pre-amplifiers. We want to observe the NMR signal after the pulses; during this time, none of the diodes conduct because the NMR signal is too small, so there is no conducting path to the power amplifier or to ground through the shunt diodes. This technique was used in [2] and [3], and is described in [4].

Fig. 4 Isolation is provided by crossed diode pairs

III. MAGNETIC CIRCUIT DESIGN

The purpose of the magnetic circuit is to create B0, the static magnetic field of the NMR system. The field needs to be homogeneous because the Larmor frequency is dependent on the field strength. Although the idea of using an electromagnet to create B0 was considered, it was decided that permanent magnets were a better choice for this classroom application. To create a homogeneous field with permanent magnets, it was necessary to create a closed magnetic circuit. Finite-element modeling software (Comsol and QuickField) was used to simulate the magnetic circuit designs. The final magnetic circuit is illustrated in Fig. 5. The field is created by 3" diameter cylindrical NdFeB magnets. The magnetic field is guided by a welded rectangular yoke made of low-carbon steel (SAE 1018). Cylindrical spacers and slanted pole pieces were used. The most accurate indication of the field strength is the frequency of the resulting NMR signal, which was found to be f0 = 5.668 MHz; this corresponds to a field strength of B0 = 0.133 T.

Fig. 5 (a) Simulated magnetic field lines; (b) photograph of the system
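As a quick check of this conversion, the proton gyromagnetic ratio (γ/2π ≈ 42.577 MHz/T) links the observed frequency to the field strength:

```python
GAMMA_OVER_2PI = 42.577e6      # Hz per tesla, for 1H nuclei
f0 = 5.668e6                   # observed NMR frequency, Hz
B0 = f0 / GAMMA_OVER_2PI
print(f"B0 = {B0:.3f} T")      # ~0.133 T, matching the value quoted above
```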

IV. PROBE CIRCUIT

The NMR system uses a single coil (a solenoid) as both a transmitter and a receiver. The coil creates a magnetic field when driven with a current (Ampere's law), and it can also detect the NMR signal because the precessing spins generate a voltage across the coil (Faraday's law). A resonant circuit is generally used to detect the NMR signals because it only allows the detection of a narrow frequency band, which can be tuned to the Larmor frequency of the system (this frequency specificity increases SNR). The resonant circuit design is an LC tank circuit, in which the transmit/receive coil serves as the inductor.

The solenoid was designed with physical and electrical constraints in mind. It was wound so that an NMR test tube could fit snugly in the coil, and so that the inductance was a reasonable value for the design. AWG 20 wire was tightly wound around a test tube, and then epoxy was used to hold it together. Properties of the coil are shown in Table 1. Note that the inductance and resistance of the coil were measured at the intended operating frequency. This is especially important for the resistance measurement because at high frequencies the skin effect in the wire becomes significant, effectively increasing the resistance.

The probe circuit is a series LC tank circuit, consisting of L, the coil inductance, and Ct, the tuning capacitor. It is necessary to match the input impedance, Zin, of the probe circuit to the other system components to achieve maximum power transfer and SNR. A matching capacitor, Cm, is added in parallel so that the total input impedance at the resonant frequency may be 50 Ω.

Fig. 6 Resonant circuit with tuning and matching capacitors

With fixed values of ω (the Larmor frequency) and L, the values of Ct and Cm were calculated so that at the resonant frequency Zin = 50 Ω (the imaginary part of the input impedance must be zero for resonance):

Ct = 1 / (ω²L − ω·√(50R − R²))    (1)

Cm = (ωL − 1/(ωCt)) / (ω·[R² + (ωL − 1/(ωCt))²])    (2)

Table 1 Probe Circuit Values

Property                      Value
Larmor Frequency              5.668 MHz
Coil Inductance @ 5.668 MHz   3.443 µH
Coil Resistance @ 5.668 MHz   2.7 Ω
Tuning Capacitor (Ct)         250 pF
Matching Capacitor (Cm)       2100 pF

Table 1 shows the calculated capacitance values of the probe circuit, and Fig. 7 shows simulations of the probe circuit done with LTspice (a worked evaluation of Eqs. (1) and (2) follows the figure caption).

Fig. 7 (a) Result of an 'ac analysis' simulation: the resonant peak corresponds exactly to the desired frequency. (b) Result of a 'transient' simulation, demonstrating impedance matching: the input voltage source is 10 V with a source impedance of 50 Ω, so if the input impedance of the probe circuit is 50 Ω the voltage at the input should be half of the source amplitude.
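Evaluating Eqs. (1) and (2) with the coil values from Table 1 gives capacitances close to the tabulated ones (our worked example; the small differences are absorbed by the trimmer capacitors described below):

```python
import math

f0 = 5.668e6                   # Larmor frequency, Hz
w = 2 * math.pi * f0
L = 3.443e-6                   # coil inductance, H
R = 2.7                        # coil resistance at f0, ohms
Z0 = 50.0                      # target input impedance, ohms

X = math.sqrt(Z0 * R - R**2)              # required series reactance
Ct = 1.0 / (w**2 * L - w * X)             # Eq. (1): tuning capacitor
Cm = X / (w * (R**2 + X**2))              # Eq. (2): matching capacitor
print(f"Ct = {Ct * 1e12:.0f} pF, Cm = {Cm * 1e12:.0f} pF")   # ~252 pF, ~2350 pF
```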


Trimmer capacitors with a range of 12 pF to 120 pF were used in addition to larger ceramic capacitors for Ct and Cm. There are several possible sources of parasitic capacitance in the probe assembly that can affect the behavior of the resonant circuit. There may also be slight variation in the desired resonant frequency due to temperature drift of the magnets or the location of the probe in the magnetic field. Having adjustable capacitors allows us to account for all of these effects by providing a 100 pF range of adjustment. When the entire NMR system is implemented, these capacitors can be adjusted until the maximum NMR signal is achieved.

V. RESULTS

The principal result of this research was the demonstration of received NMR signals with sufficient SNR. Figure 8 shows oscilloscope shots of observed NMR signals.

Fig. 8 Oscilloscope shots of an FID curve and an echo signal

In order to execute pulse sequences to measure time constants, we must first determine the pulse durations that correspond to 90° and 180° tip angles. The mapping of pulse width to tip angle is shown in figure 9.

Fig. 9 This plot represents the magnetization vector, M, being rotated by the excitation field, B1. The x-axis is the pulse width but also represents the tip angle, θ, of M. The y-axis is the FID amplitude resulting from a pulse.

After the durations of the 90° and 180° pulses were determined, we were able to execute the pulse sequences used to measure the time constants T1 and T2: 90–180 (spin-echo) and 180–90 (inversion-recovery). The spin-echo sequence was used to observe transverse relaxation, which is of the form $M_{xy}(t) = M_0 e^{-t/T_2}$. The inversion-recovery sequence was used to observe longitudinal relaxation, which is of the form $M_z(t) = M_0(1 - 2e^{-t/T_1})$. The acquired data and curve fits are shown in figure 10; T2 was determined to be 10.8 ms and T1 to be 15.8 ms.

Fig. 10 (a) Data acquired using the spin-echo sequence; the amplitude of the echo was measured to give an accurate indication of the amplitude of the transverse magnetization. (b) Data acquired using the inversion-recovery sequence; a two-pulse sequence is necessary to observe the amplitude of the longitudinal relaxation.
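Relaxation fits of this kind can be reproduced with a standard nonlinear least-squares routine. The sketch below is illustrative only — the arrays are synthetic stand-ins, not the acquired data:

```python
import numpy as np
from scipy.optimize import curve_fit

# Relaxation models from the text
def spin_echo(t, M0, T2):
    # Transverse relaxation: Mxy(t) = M0 * exp(-t / T2)
    return M0 * np.exp(-t / T2)

def inversion_recovery(t, M0, T1):
    # Longitudinal relaxation: Mz(t) = M0 * (1 - 2 * exp(-t / T1))
    return M0 * (1.0 - 2.0 * np.exp(-t / T1))

# Synthetic echo amplitudes (delay in ms); stand-ins for real measurements
t = np.array([2.0, 5.0, 10.0, 20.0, 40.0])
echo = np.exp(-t / 10.8)

(M0_fit, T2_fit), _ = curve_fit(spin_echo, t, echo, p0=(1.0, 5.0))
print(f"fitted T2 = {T2_fit:.1f} ms")   # ~10.8 ms for this synthetic set
```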

VI. CONCLUSION AND FUTURE WORK

A functional low-field pulsed nuclear magnetic resonance (NMR) system for benchtop undergraduate laboratory studies was demonstrated. The developed system will be a useful NMR learning tool for students: fundamental NMR concepts that are difficult to visualize can be easily demonstrated with it. The use of the FPGA to produce pulses and process the received NMR signal provides broad flexibility. The FPGA can be used to produce complex pulse sequences and will also be used to drive gradient coils in the future development of this system. The addition of gradient coils is currently in development and will allow positional detection of the sample. The future goal of this research is to develop the NMR system into a classroom MRI system.

REFERENCES

1. Wright S, Brown D, Porter J et al (2002) A desktop magnetic resonance system. Magnetic Resonance Materials in Physics, Biology and Medicine, Vol. 13, pp 177-185.
2. Kirsch J, Newman R, A pulse NMR experiment for an undergraduate physics laboratory. http://web.mit.edu/8.13/www/JLExperiments/JLExp_12AJP.pdf
3. Hayes C, Sahakian A, Yalvac B (2005) An inexpensive laboratory module to teach principles of NMR/MRI. Proceedings of the 2005 ASEE Conference.
4. Fukushima E, Roeder S (1981) Experimental Pulse NMR. Addison-Wesley Publishing Company, Inc.


The Basics of Bioengineering Education
Arthur T. Johnson
Fischell Department of Bioengineering, College Park, MD 20742

Abstract— Bioengineering education often tends towards applied biological science. However, engineering is a profession different from the discipline of biological science. This difference should be maintained in undergraduate bioengineering education. A curriculum based upon fundamentals of engineering, science, math, and liberal studies can give students the flexibility they need to master the challenges of future employment.

Keywords— teaching for success, education, undergraduate curriculum, modern biology, curriculum content.

I. INTRODUCTION

Bioengineering can easily be confused with applied biology, but education for bioengineering students also needs to include engineering fundamentals. Engineering differs from science in that it results in satisfactory products and processes through creative activities. Bioengineering education must emphasize both the science and engineering sides of its roots.

There is a tendency today for biological science, in particular, to focus on lower hierarchical levels. This reductionism becomes reflected in bioengineering subjects taught to undergraduates. Yet there is still a need for bioengineers to manage production processes, understand package sterilization, and design instruments for medical use. These talents must be developed through exposure to course topics dealing with all biological levels and systems in general. Bioengineering education should aim to produce biological engineers rather than applied biological scientists.

Educational experiences for bioengineering students should stress fundamentals, analogical methods, and a broad range of applications. The sciences of physics, chemistry, biology, and engineering science (especially controls, information transfer, and strength of materials) need to be included. Calculus, differential equations, and mathematical modeling techniques are very important. Engineers, in particular, draw many of their conclusions from model results in a deductive fashion (as opposed to biologists who, like most scientists, draw general principles from accumulated facts by the process of induction). For engineers to develop models correctly, they must be given solid concepts of the ways things work, and then be able to translate these concepts into (usually) mathematical form. Mathematical

manipulative abilities are sadly lacking in many of our bioengineering undergraduates today.

Students should learn about terms and nomenclature, which are often quite different across the sundry application areas and are needed to communicate with specialists in the different fields. Nomenclature, especially in biology, changes rapidly. Despite the emphasis on nomenclature, bioengineering education should emphasize general principles and possible applications rather than fact memorization.

Modern biology emphasizes four approaches: 1) inheritance and information legacy, 2) developmental and ecological explanations, 3) phenotypic plasticity and biodiversity, and 4) relationships within interactive networks. Each of these is conducive to bioengineering understanding and application. Biological responses are very dependent upon the surrounding physical, chemical, and biological environment. Modern epigenetics indicates that environments have more effect than previously believed. It has always been known that living beings respond to environmental cues, but it is becoming clearer now that the environment can change the genetic legacy of living things. New paradigms for teaching information legacies need to include these epigenetic effects as well as the cultural legacies known as memes.

Creative experiences in an engineering context are called designs, and bioengineering educational programs should not skimp on design projects. Especially if design projects are combined with group projects and have an identifiable communications component, bioengineers will learn the essence of engineering involving biological systems. These are important problem-solving skills necessary for the successful practice of bioengineering. It is this ability to use logic to solve difficult problems that is fundamental to the practice of engineering.

Not all bioengineering students will further their educations in graduate or professional schools. Undergraduate bioengineering education should serve those who seek employment immediately after the bachelor's degree, as well as those who plan to continue their formal educations. Hence, undergraduate bioengineering education should include additional skills, such as economics, business management, cultural awareness, and communication skills.

The undergraduate experience should expose students to generalities and fundamental thinking. Undergraduates should not become too specialized in either the knowledge


that they possess or the methods they use. The more versatile they are when they graduate, the more likely it is that they will be successful in their professional endeavors. This also includes the ability to work with non-human biological systems, if need be.

Lastly, bioengineering education needs to be flexible. Although progress in biology has largely moved from fundamentals to application details, there is still enough new information being added to biological science that bioengineering curricula must be able to accommodate major changes every few years. Breakthroughs in physics, mathematics, engineering science, and chemistry are much less likely in the foreseeable future than are major breakthroughs in biology. This means that courses may have to be added, some dropped, and many changed to keep up with the field. Better ways must be found to package information in order to deliver necessary knowledge efficiently to students. The ways courses were taught to present faculty a decade ago may not be appropriate for the students they teach today.

It would certainly be a mistake to assume that bioengineering education can be completed in four years. Whether

they continue their formal education or not, bioengineering graduates will necessarily gain knowledge as they pursue their careers. While it is impossible to cover each of the aforementioned fundamentals completely in a four-year curriculum, it is possible to expose students to them in various ways in a coordinated curriculum. Courses meeting curriculum requirements therefore cannot be considered independent of one another. There should be communication among instructors to make them cognizant of the overall goals of the program and each instructor's part toward achieving those goals.

It was stated in a recent article in ASEE Prism (Lord, M., 2010, Not What Students Need, Prism 19(5):44-46) that "there's a pretty big gap between what engineers do in practice and what we think we're preparing them for". Because the world of engineering practice is, and will continue to be, dynamic, we need to assure that our graduates are versatile, good communicators, sound in technical fundamentals, and specialists in technical diversity. Building upon that foundation will lead to success in their careers.


HealthiManage: An Individualized Prediction Algorithm for Type 2 Diabetes Chronic Disease Control
Salim Chemlal1, Sheri Colberg1, Marta Satin-Smith2, Eric Gyuricsko2, Tom Hubbard2, Mark W. Scerbo1, and Frederic D. McKenzie1
1 Old Dominion University, Norfolk, VA, USA
2 Eastern Virginia Medical School, Norfolk, VA, USA

Abstract— This paper describes a prediction algorithm for blood glucose in Type 2 diabetes. An iPhone application was developed that allows patients to record their daily blood glucose levels and provides them with relevant feedback, using the prediction algorithm to help control their blood glucose levels. Several methods using theoretical functions were tested to select the most accurate prediction method. The prediction is adjusted with each glucose reading input by the patient, taking into consideration the time of the glucose reading and the time after the patient's last meal, as well as any physical activity. The individualized prediction algorithm was tested and verified with real patient data and also validated using a non-parametric regression method. The accuracy of the prediction results varied across the approaches and was adequate for most of the methods tested. The predicted results converged closer to the patients' actual glucose readings after each additional input reading. The findings of the research were encouraging, and the predictive system provided what we believe to be helpful feedback to control, improve, and take proactive measures to regulate blood glucose levels.

Keywords— Prediction, blood glucose, diabetes, exercise.

I. INTRODUCTION

There are over 23.6 million children and adults in the United States with diabetes [1]. Type 2 diabetes is the most common form of this chronic disease and is now one of the most rapidly growing forms of diabetes. Type 2 diabetes occurs when the body does not produce enough insulin or loses its ability to use insulin efficiently, which results in glucose building up in the blood instead of entering body cells. Uncontrolled diabetes is the leading cause of kidney failure and is directly responsible for harming blood vessels, leading to early heart attacks, stroke, blindness, and a need for amputations. A management strategy for diabetes is keeping blood sugar in a close-to-normal range, preventing unsafe glucose levels. Our objective is to help Type 2 diabetic patients monitor and control their glucose levels based on daily feedback on glucose regulation and compliance. A prediction algorithm has been developed to provide the patient with feedback about ongoing glucose management

based on a predicted model. We also developed an iPhone application that allows patients to record their daily blood glucose levels and provides relevant feedback using the prediction algorithm. Several methods involving functions such as regression, power series, and exponential functions were considered and tested to select the most appropriate and accurate prediction method. In all methods, the prediction is adjusted with each new glucose reading input by the patient. The prediction also takes physical activity into account, considering the fact that even mild exercise may have a significant effect on blood glucose variation; the duration and intensity of exercise are the key factors that contribute to the effect of an activity on glucose level. The individualized prediction algorithm was tested and verified with realistic patient data. The accuracy of the prediction results varied across the different approaches and was adequate for most methods; however, the last method, involving exponential functions, was the most accurate. As expected, the predicted results converged closer to the patients' actual glucose readings after each additional input. The results of our selected prediction method were also validated using a non-parametric regression method. An interactive iPhone application was designed to provide patients with valuable feedback based on the prediction model to help track and control their blood glucose levels.

Numerous prior efforts have addressed blood glucose prediction, but they were intended for Type 1 diabetic patients or based on continuous glucose monitoring for short-term predictions using data-driven auto-regressive (AR) models [2,3] or simple regression models [4]. Some of the glucose prediction studies for Type 1 included medication dosing decision support and a GUI interface along with the predictive model [5,6]. No such work had previously been conducted for Type 2 diabetes.

II. MATERIALS AND METHODS

A. Prediction Strategies

Our objective was to develop a glucose prediction algorithm that can provide patients with valuable feedback


based on comparing predicted values with the actual readings. Several methods using different theoretical functions were considered. The first method involved fitting a typical Type 2 diabetic patient's blood glucose curve over a 24-hour period using a high-order polynomial. The polynomial would change over time based on the patient's input blood glucose readings. Since such readings are typically collected once or twice a day, a close fit to an individual patient may only be accurate after several weeks of use and adaptation. This method is based on a least-squares curve-fitting technique and was applied to the initial idealized glucose values and the readings collected each day from the patient. The idealized glucose values were obtained from a combination of sources and need not be representative of a particular individual, but closer initial values would obviously take less time to converge to accurate predictions. For this method, it is assumed that the best-fit curve has the minimal sum of squared deviations from the dataset:

$$\min \sum_{i=1}^{n}\left[y_i - \left(a_0 + a_1 x_i + a_2 x_i^2 + \cdots + a_m x_i^m\right)\right]^2 \quad (1)$$

where a0, a1, a2, ..., am are the polynomial coefficients, xi is the glucose reading time, and yi is the glucose reading.
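Eq. (1) is the standard least-squares criterion, which numerical libraries solve directly. A minimal sketch with hypothetical readings (not patient data):

```python
import numpy as np

# Hypothetical (time-of-day in hours, glucose in mg/dL) readings
x = np.array([7.0, 9.0, 12.0, 14.0, 18.0, 20.0])
y = np.array([120., 160., 130., 150., 140., 170.])

m = 4                              # polynomial order (illustrative)
coeffs = np.polyfit(x, y, m)       # minimizes the criterion of Eq. (1)
predict = np.poly1d(coeffs)

print(predict(16.0))               # predicted glucose at 16:00
```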

As a second method, the 24-hour prediction period was split into three smaller periods based on glucose behavior around meals: before and after breakfast, before and after lunch, and before and after dinner. After a meal, the glucose level increases immediately, reaching a peak within approximately 45 min. After this peak, levels fall dramatically, almost as quickly as they rose. When glucose drops back down to the normal range it was in before the meal, which usually occurs within 2 hours after food consumption, it keeps gradually decreasing. Therefore, lognormal and Weibull functions were considered to represent this last behavior. In this method, Weibull functions were represented as follows:

$$f(T) = \frac{\beta}{\eta}\left(\frac{T}{\eta}\right)^{\beta-1} e^{-\left(T/\eta\right)^{\beta}} \quad (2)$$

where η and β are the scale and shape parameters, respectively.
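Eq. (2) is the two-parameter Weibull density and can be evaluated directly; the η and β below are illustrative placeholders, since the paper fits different parameters per meal period:

```python
import numpy as np

def weibull_pdf(T, eta, beta):
    """Two-parameter Weibull density of Eq. (2)."""
    return (beta / eta) * (T / eta) ** (beta - 1) * np.exp(-(T / eta) ** beta)

# Illustrative parameters only (hours after a meal)
T = np.linspace(0.1, 4.0, 50)
f = weibull_pdf(T, eta=1.5, beta=2.0)
```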

The three periods of the prediction curve use the same Weibull function but with different parameters, based on the initial idealized glucose curve and any added glucose input readings.

The third method was based on the observations and issues encountered applying the previous methods, making it seemingly more logical and reliable. In this method, the 24-hour prediction period is also partitioned into three periods: after breakfast and before lunch, after lunch and before dinner, and after dinner and before breakfast; they are characterized by the same functions but different parameters. This prediction method also uses the idealized glucose values as a starting graph and updates the function parameters after every added input reading. Any food intake causes the glucose level to increase immediately, but the main concern of the prediction is not the increase or peak glucose level after food intake, but the drop and change following a meal and primarily before the next meal. In this method, the drop of glucose after a meal, which starts within 45 minutes after the meal, is assumed to follow an exponential function. Also, when the glucose level drops back down to near-regular levels for an individual patient, it is assumed to follow a linear function that gradually decreases with time. The exponential function is expressed as:

$$y = a + b e^{-x} \quad (3)$$

where a and b are the location and rate parameters. It was important to assume a meal time for breakfast, lunch, and dinner in all the above methods. The assumed meal time gets updated with respect to any input reading that is known to be before or after a meal; this again is individualized for each particular patient. For instance, when the assumed meal time increases or decreases by Δt, the prediction curve is shifted left or right. By adjusting the assumed meal time, better accuracy can be achieved, especially if the person is consistent about meal times.

B. Management and Compliance Feedback

Since the goal of the prediction model is to provide reasonably accurate and helpful feedback to the patient, it is critical to have a well-developed and meaningful response mechanism. The flow chart below (Fig. 1) represents the feedback displayed to the patient by comparing the actual input reading (CG) to the predicted reading (PG) at a certain time. The feedback process was developed with the help of the pediatric endocrinologists on our team. It has two main parts based on the glucose reading input time: before or after a meal. "Before the meal" readings are past the after-meal glucose peak, which occurs within an hour after finishing a meal; "after the meal" readings are within an hour of finishing the meal. Since this is not a recommended time to take readings, most of the feedback based on the predicted glucose levels is on the before-meal side. If the patient's input reading is before the meal, it is compared to the predicted reading from the predictive model as well as to other variables. The variables in the flow chart are given initial values, such as 60 mg/dL, but may be changed by the patient's physician. For the after-meal path, since the rate of blood glucose increase depends highly on the glycemic index of the meal, we are more interested in predicting the individual patient's usual drop and change following a meal and before the next meal.
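The before-meal branch of such a feedback process can be sketched as simple threshold logic. This is a hypothetical simplification — the published flow chart (Fig. 1) contains more branches, and the thresholds are physician-adjustable:

```python
LOW_THRESHOLD = 60.0   # mg/dL; default initial value, adjustable by the physician

def before_meal_feedback(cg, pg, low=LOW_THRESHOLD):
    """Compare current reading (CG) with predicted reading (PG).

    Hypothetical simplification of the Fig. 1 flow chart.
    """
    if cg < low:
        # "Rule of 15": consume 15 g of carbohydrates, wait ~15 min, recheck
        return "Low glucose: apply the rule of 15 and recheck."
    if cg > pg:
        return "Higher than predicted: review recent food intake."
    return "Within the predicted range: keep current regimen."

print(before_meal_feedback(cg=55.0, pg=110.0))
```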


Fig. 1 Feedback Process Flow Chart

C. GUI Interface

The predictive model, along with the feedback process, was implemented in an iPhone application for simple daily use by patients. When a patient inputs the current glucose reading, the application provides feedback about their glucose level. The patient may also input the time after the last meal with a scroll menu for a more accurate prediction; otherwise, the prediction model uses an assumed last meal time based on previously collected data. The feedback responses are implemented as alert windows, which pop up instantly after saving the current glucose reading. The iPhone application allows easy everyday use of the predictive system; the feedback provided can help patients control their glucose regulation and keep it as close as possible to the range specified by their doctors.

D. Considerations

The prediction algorithm also takes into consideration any physical activity. The relative utilization of fat and carbohydrate during exercise can vary enormously and depends strongly on exercise intensity. In the case of physical activity, the predictive system adjusts the predictions based on the duration and intensity of exercise.

III. RESULTS

The methods utilized in our prediction algorithm were tested with realistic representative data provided by Children's Hospital of The King's Daughters; observations were made based on the long-term individual behavior and the percentage error between the predicted and actual readings. The first method, using a high-order polynomial, represented the typical graph well but was too computationally complex. The polynomial curve was composed of 48 data points recorded every 30 min from the typical graph, which required a number of high-order coefficients for a good representation. Therefore, replacing or averaging an input reading with the predicted one at that time would still not change the shape much to adjust to further readings. From the observations and results of the first method, we chose to split the graph into three periods and implement a prediction method for each one separately. The method based on Weibull functions represented the rise and drop of glucose after each meal well; however, it did not handle the period after glucose drops back down to the normal range, which has a slower decreasing rate. The third method, which was based on an exponential function, produced results that are accurate enough to drive the feedback process. We started by dividing the 24-hour curve into three parts based on the major meals, as in method two; however, in this method we were not concerned much with the immediate rise of glucose after a meal. We were more concerned with predicting the glucose drop and behavior after the meal and primarily before the next meal. In this method, if a reading is recorded before a meal, dinner for instance, it does not change the shape of the exponential function used for dinner, but it adjusts the assumed meal time. In other words, the curve is shifted closer to an assumed meal time, which allows the application to estimate a usual meal time for an individual even if it is not entered for a certain day. This sample reading recorded before dinner is also considered an after-meal reading for lunch; feedback is then given by comparing the prediction with the actual reading in the after-lunch period. The sample reading is then added to the prediction model for next time. Hence, readings either shift the glucose curve up or down or they change the exponential drop by changing its parameters.
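A minimal sketch of how such an update step might look, assuming the exponential drop of Eq. (3) and a simple shift of the curve toward each new reading; the update rule and gain are illustrative, not the paper's exact procedure:

```python
import numpy as np

def predicted_glucose(t_after_meal, a, b):
    # Eq. (3): predicted drop after the post-meal peak (t in hours)
    return a + b * np.exp(-t_after_meal)

def incorporate_reading(a, b, meal_time, t_reading, glucose, gain=0.5):
    # Illustrative update: nudge the curve offset toward the new reading.
    # The paper's actual rule also refits b and shifts the assumed meal time.
    error = glucose - predicted_glucose(t_reading - meal_time, a, b)
    return a + gain * error, b, meal_time

# Hypothetical idealized curve: offset 110 mg/dL, drop amplitude 60 mg/dL
a, b, dinner = 110.0, 60.0, 19.0          # dinner assumed at 19:00
a, b, dinner = incorporate_reading(a, b, dinner, t_reading=20.5, glucose=150.0)
print(predicted_glucose(1.0, a, b))       # prediction 1 h after dinner
```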


Fig. 2 Blood Glucose Prediction of Actual Patient Data

The above figure shows a typical Type 2 diabetic patient graph with idealized values (top red curve), samples of real patient readings before and after dinner (blue stars), the final patient prediction graph after 3 weeks of representative data (bottom black curve), and the two actual readings of the day following the prediction (blue squares). After adding the realistic patient data to the typical curve (the stars in the figure), the prediction graph slightly changed the assumed meal times and the exponential function parameters. For instance, the typical graph had 19:00 as a starting dinner time; however, the patient readings were recorded right before dinner at an earlier time. Therefore, the prediction slightly shifted its assumed dinner time from 19:00 to an earlier time; the final assumed dinner time after 3 weeks of readings was 18:30. The model was also validated using a non-parametric regression method.

The predictive system was implemented on the iPhone as a user-friendly application, making interaction with patients simple and easy. Reliable and helpful feedback is provided instantly upon inputting any new glucose reading. For instance, when the patient's current reading is too low, below 60 mg/dL, the system recommends the rule of 15 (Fig. 3): consume 15 grams of carbohydrates, wait about 15 minutes, then recheck the glucose level.

IV. CONCLUSION

In this paper we presented a prediction algorithm using different methods to predict blood glucose regulation for Type 2 diabetic patients. Feedback responses are then provided based on a comparison between the predicted and actual readings at the time of the reading. The predictive system also takes into consideration any physical activity, based on the total exercise duration and intensity.

Fig. 3 iPhone Application Feedback Based on the Prediction Model

The methods were tested and verified with realistic representative data, and the performance was assessed by considering the convergence of the typical data to the representative data and the percentage error. The findings of this research were encouraging, and the predictive system provided what we believe to be helpful feedback to control, improve, and take proactive measures to regulate blood glucose levels. The next step would be to test the system with a statistically significant set of actual patient data.

REFERENCES

[1] Diabetes Statistics, American Diabetes Association, April 2007, http://www.diabetes.org/diabetes-basics/diabetes-statistics/
[2] Bremer T, Gough D (1999) Is blood glucose predictable from previous values? A solicitation for data. Diabetes 48:445–451.
[3] Gani A, Gribok AV, Rajaraman S, Ward WK, Reifman J (2009) Predicting subcutaneous glucose concentration in humans: data-driven glucose modeling. IEEE Trans Biomed Eng, in press.
[4] Sparacino G, Zanderigo F, Corazza S, Maran A, Facchinetti A, Cobelli C (2007) Glucose concentration can be predicted ahead in time from continuous glucose monitoring sensor time-series. IEEE Trans Biomed Eng 54(5):931–937.
[5] Albisser AM (2005) A graphical user interface for diabetes management that integrates glucose prediction and decision support. Diabetes Technol Ther 7:264–273.
[6] Albisser AM, Baidal D, Alejandro R, Ricordi C (2005) Home blood glucose prediction, clinical feasibility and validation in islet cell transplant candidates. Diabetologia, in press.

Salim Chemlal, Old Dominion University, Norfolk, USA
E-mail: [emailprotected]


Dynamic Movement and Property Changes in Live Mesangial Cells by Stimuli
Gi Ja Lee1,2, Samjin Choi1,2, Jeong Hoon Park1,2, Kyung Sook Kim1,2, Ilsung Cho1,2, Sang Ho Lee3, and Hun Kuk Park1,2,*
1 Department of Biomedical Engineering, College of Medicine, Kyung Hee University
2 Healthcare Industry Research Institute, Kyung Hee University
3 Dept. of Nephrology, College of Medicine, Kyung Hee University, Seoul 130-701, Korea

Abstract— Atomic force microscopy (AFM) has become an important device for non-invasive imaging of various cells and biological materials. A major advantage of AFM over conventional optical and electron microscopes is its convenience: sample preparation does not require special coating or vacuum, and AFM can examine samples even under aqueous conditions. Although AFM was originally used to obtain the surface topography of a sample, it can also precisely measure the interactions between its probe tip and the sample surface from force-distance measurements. Glomerular mesangial cells (MC) occupy a central position in the glomerulus. It is known that MC can control not only glomerular filtration but also the cell response to local injury, including cell proliferation and basement membrane remodeling. It has been reported that an increase in angiotensin II caused by activation of the renin-angiotensin-aldosterone system (RAAS) causes abnormal MC function. In this study, we observed structural and mechanical changes to MC after angiotensin II treatment using AFM. Real-time imaging of live cells showed that dynamic movement of the cells was stimulated by angiotensin II injection. Simultaneously, changes in the stiffness and adhesion force of MC caused by angiotensin II and an angiotensin II inhibitor (telmisartan) were revealed using force-distance curve measurements.

Keywords— AFM, Mesangial cell, Real-time imaging, force-distance analysis, RAAS.

I. INTRODUCTION

Atomic force microscopy (AFM) has become an important tool for non-invasive imaging of various cells and biological materials since its invention in 1986 by Binnig et al [1]. The major advantages of AFM over conventional optical and electron microscopes for imaging cells include the fact that no special coating or vacuum is required and that imaging can be done in all environments – air, vacuum, or aqueous conditions. AFM imaging of live cells under physiological conditions is complicated and challenging even for experts, because cells are soft and easily detached from the substrate. To prevent the detachment of cells during AFM imaging, many researchers have utilized cell fixation methods such as chemical fixatives, micropipettes, trapping by agar, and the pores of filters [2-4]. However, as a result of the fixation, artifacts and depressions were

reported during the sample preparation or the measurement process [5]. Murphy M.F. et al. reported that successful imaging of live human cells using AFM is influenced by many variables, including cell culture conditions, cell morphology, surface topography, scan parameters, and cantilever choice [6].

The glomerular mesangial cell (MC) occupies a central anatomical position in the renal glomerulus. The MC not only can control glomerular filtration but may also be involved in the response to local injury, such as cell proliferation and basement membrane remodeling [7]. Angiotensin II, a potent vasoconstrictor, has a key role in renal injury and in the progression of chronic renal disease of diverse causes [8]. In this study, we performed imaging of live MC by contact mode AFM. From real-time imaging of live cells, we measured the dynamic movement and mechanical changes of cells caused by stimuli such as drug injection.

II. METHODOLOGY

A. Cultured Mesangial Cells

Sprague-Dawley (SD) rats (150–200 g) were used for glomerular cell culture. Glomeruli were isolated from their kidneys by the common sieving method through serial steel meshes. Completely purified glomeruli were collected with a micropipette and used for primary culture. Dulbecco's Modified Eagle's Medium (DMEM) supplemented with 20% fetal bovine serum (FBS), 10 mg/mL bovine insulin, 4 mmol/L glutamine, antibiotic-antimycotic, and 5.5 mg/mL human transferrin was used as the primary cell culture medium. Cells were identified as MCs by their spindle shape in phase contrast microscopy, as well as by positive staining with anti-smooth muscle actin and negative staining with cytokeratin and common leucocyte antigen antibodies in immunofluorescence microscopy. We used MCs from the rats between the 4th and 9th passages.

B. Preparation for AFM Measurement

Contact mode AFM images and force-distance curves were obtained using the Nanostation IITM (Surface Imaging


Systems, Herzogenrath, Germany). Data acquisition and processing were performed with SPIPTM (Scanning Probe Image Processor, version 4.1, Image Metrology, Denmark). Live MCs were scanned at a resolution of 256×256 pixels with a scan speed of 3 lines/s. We used gold-coated silicon cantilevers for contact mode, and the loading force was adjusted to below 2–3 nN. In order to detect real-time cell responses, angiotensin II and telmisartan (Sigma, St. Louis, Missouri, USA) were applied at a concentration of 5 μM. Once a live cell was identified using the imaging mode, locations for force data were selected. After force curve acquisition was completed, a subsequent image was obtained to make sure that the cell had not shifted.

III. RESULTS AND DISCUSSIONS

Figure 1 shows the topography image of a live MC in DMEM medium buffered with HEPES. It has been reported that MCs possess some of the morphological characteristics of vascular smooth muscle cells (SMC), such as bundles of actin filaments [9]. As shown in Figure 1, the images exhibited features associated with cytoskeletal structures, such as actin filaments and other filamentous elements.

Fig. 1 AFM topography and deflection images of live mesangial cells in DMEM medium buffered with HEPES

Figure 2 shows the effect of angiotensin II on MC in a time series of deflection images of a live MC. The MC gradually contracted towards the center with the passage of time after angiotensin II addition.

Fig. 2 Time series of deflection images of a live mesangial cell after 1 μM angiotensin II addition: (left) before adding angiotensin II; (right) 15 min after adding angiotensin II

Force curves can provide useful information about the physical properties of a cell [10]. The slope of the extension curve was used to determine the stiffness of the cell. As shown in Figure 3, MC treated with angiotensin II were stiffer than control MC, but MC treated with both angiotensin II and the angiotensin II inhibitor (telmisartan) were not as stiff as the angiotensin II-treated MC (p < 0.0001).

Table 1 Calculated spring constants of MCs before and 20 min after Ang II treatment, and 20 min after Ang II and telmisartan treatment

Cellular spring constant Kmc (N/m) (n=20)
MC before Ang II treatment                          0.031 ± 0.009**
MC 20 min after Ang II treatment                    0.109 ± 0.019**
MC 20 min after Ang II & telmisartan treatment      0.051 ± 0.016*

** p < 0.0001, * p < 0.005
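One common reduction for extracting a cellular spring constant from the extension-curve slope treats the cantilever and the cell as springs in series; the sketch below uses hypothetical numbers and may differ from the authors' exact analysis:

```python
def cell_spring_constant(k_cantilever, slope):
    """Series-spring estimate of cell stiffness from a force curve.

    The measured force-vs-displacement slope on the cell (k_eff) and the
    cantilever constant combine as springs in series:
        1/k_eff = 1/k_cantilever + 1/k_cell
    This is one common reduction, not necessarily the authors' procedure.
    """
    return k_cantilever * slope / (k_cantilever - slope)

# Hypothetical values: 0.2 N/m cantilever, measured slope 0.05 N/m
print(f"k_cell = {cell_spring_constant(0.2, 0.05):.3f} N/m")
# -> 0.067 N/m, the same order as the Table 1 values
```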

IV. CONCLUSIONS

To our knowledge, this study is the first to image live mesangial cells from the glomerulus by AFM. In order to detect the real-time cell response, we successfully observed the topography changes of MC caused by angiotensin II injection, in particular the cytoskeletal dynamics in MC. Simultaneously, the elastic changes of MC caused by angiotensin II and the angiotensin II inhibitor (telmisartan) were revealed using force-distance analysis. From these results, we conclude that the contraction of MC by angiotensin II was effectively blocked by telmisartan.

ACKNOWLEDGMENT

This research was supported by the research fund from Seoul R&BD (grant # CR070054).


REFERENCES

1. Binnig G, Quate CF, Gerber C (1986) Atomic force microscope, Phys Rev Lett 56: 930-933
2. Horber JK, Mosbacher J, Haberle W et al. (1995) A look at membrane patches with a scanning force microscope, Biophys J 68: 1687-1693
3. Grad A, Ikai A (1995) Method for immobilizing microbial cells on gel surface for dynamic AFM studies, Biophys J 69: 2226-2233
4. Kasas S, Ikai A (1995) A method for anchoring round shaped cells for atomic force microscope imaging, Biophys J 68: 1678-1680
5. Moloney M, McDonnell L, O'Shea H (2004) Atomic force microscopy of BHK-21 cells: an investigation of cell fixation techniques, Ultramicroscopy 100: 153-161
6. Murphy MF, Lalor MJ, Manning FCR et al. (2006) Comparative study of the conditions required to image live human epithelial and fibroblast cells using atomic force microscopy, Microscopy Research and Technique 69: 757-765
7. Schlondorff D (1987) The glomerular mesangial cell: an expanding role for a specialized pericyte, FASEB J 1: 272-281


8. Klahr S, Morrissey J (1998) Angiotensin II and gene expression in the kidney, Am J Kidney Dis 31: 171-176
9. Elger M, Drenckhahn D, Nobiling R et al. (1993) Cultured rat mesangial cells contain smooth muscle α-actin not found in vivo, Am J Pathol 142: 497-509
10. Volle CB, Ferguson MA, Aidala KE et al. (2008) Quantitative changes in the elasticity and adhesive properties of Escherichia coli ZK1056 prey cells during predation by Bdellovibrio bacteriovorus 109J, Langmuir 24: 8102-8110

The corresponding author:
Author: Hun Kuk Park
Institute: Kyung Hee University
Street: 1 Hoegi-dong, Dongdaemun-gu
City: Seoul
Country: Korea
Email: [emailprotected]


Cooperative Interactions between Myosin II and Cortexillin I Mediated by Actin Filaments during Cellular Deformation
Tianzhi Luo1 and Douglas N. Robinson1,2,3
1 School of Medicine/Department of Cell Biology, Johns Hopkins University, Baltimore, USA
2 School of Medicine/Department of Pharmacology and Molecular Sciences, Johns Hopkins University, Baltimore, USA
3 School of Engineering/Department of Chemical and Biomolecular Engineering, Johns Hopkins University, Baltimore, USA

Abstract— A mechanosensory system consisting of nonmuscle myosin II, cortexillin I, and actin filaments has recently been identified. During cellular deformation, myosin II and cortexillin cooperatively accumulate in highly deformed regions in response to the applied stress, and the extent of accumulation increases with increased stress. The cooperativity is suggested to be mediated by the actin filaments. The accumulation of these proteins increases the mechanical resistance of the cells against the external load, leading to diminishing deformation.

Keywords— Mechanosensing, Myosin, Actin, Actin crosslinking protein.

I. INTRODUCTION

Cells are capable of sensing mechanical stimuli and translating them into biochemical signals, which enables them to adapt to their physical surroundings by remodeling their cytoskeletal architectures, activating various signaling pathways, and changing their gene expression [1,2]. These phenomena involve two essential processes, mechanosensing and mechanotransduction. In these processes, force or deformation needs to be transmitted from the outside environment to the proteins and organelles inside the cells. The actin cytoskeleton, composed of actin filaments, myosin motors, and actin-crosslinking proteins (ACLPs), plays a critical role in force propagation and in the response to deformation. Recently, we discovered a new mechanosensing phenomenon in which myosin II and the ACLP cortexillin I cooperatively accumulate in highly deformed regions of dividing Dictyostelium cells, as shown in Fig. 1, and the extent of accumulation increases with increasing force, as shown in Fig. 2 [3,4]. In addition, the length of the cell in the pipette decreases when the proteins start to accumulate in micropipette aspiration experiments (not shown here). This observation is a typical example of cells protecting themselves by reinforcing their local cytoskeleton in response to external forces.

II. RESULTS AND DISCUSSIONS

One possible mechanism of the cooperative accumulation is that the binding of myosin to actin enhances cortexillin binding to the actin filament. There are two features of myosin binding to actin filaments. The first is that the binding lifetime increases with the external force, which explains the positive proportionality between the accumulation and the applied force shown in Fig. 2 and the accelerated accumulation shown in Fig. 3. The second is that myosin alone is able to bind to actin filaments cooperatively, and the corresponding transient curves have a sigmoid shape [5,6], which is thought to be the origin of the observed cooperative accumulation. It was proposed that the cooperative binding of two neighboring myosins to a common actin filament is attributable to their elastic interactions mediated by the actin filament. Single-molecule measurements demonstrated that the binding of cortexillin I to the actin filament is not force-dependent (over a -2 to 2 pN range), suggesting that cortexillin alone does not bind to the actin filament cooperatively [4].

Fig. 1 Myosin accumulation during micropipette aspiration, adapted from ref. [4]

Therefore, the elastic deformation in actin filaments caused by myosin binding facilitates the cortexillin binding,


resulting in the cortexillin accumulation. On the other hand, cortexillin cross-links the actin filaments into a network, allowing tension to build up such that the myosin head can feel the tension; its binding lifetime is then increased. We suspect this kind of cooperative binding might also exist among other ACLPs. We simulated the corresponding two-dimensional reaction-diffusion problems using coarse-grained kinetic Monte Carlo simulation. In our simulations, the diffusion coefficients vary in the range of 0.01–100 μm²/s, and the characteristic binding/unbinding rates are in the range of 0.01 s⁻¹ to 100 s⁻¹. The essential mechanism we propose is that myosin binding leads to local conformational changes in the actin filaments, which facilitate cortexillin binding in the nearby region. As expected, the cooperativity increases with the strength of the elastic interactions. The simulation shows that myosins and ACLPs accumulate cooperatively, as shown in Fig. 4. The kinetics of accumulation behaves as a Hill-type function. The corresponding time scale of accumulation is also consistent with that of the experiment when physiological values of myosin and cortexillin are utilized.
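The proposed mechanism can be caricatured with a toy Gillespie-type kinetic Monte Carlo model in which bound myosin raises the cortexillin on-rate at neighboring lattice sites. All rates and the 1-D lattice below are illustrative assumptions, far simpler than the paper's coarse-grained simulations:

```python
import random

N = 100                              # 1-D lattice of actin-filament sites
myosin = [False] * N
cortexillin = [False] * N

K_ON_MYO, K_OFF_MYO = 1.0, 0.1       # illustrative rates, 1/s
K_ON_CTX, K_OFF_CTX = 0.1, 0.1
COOP = 10.0                          # on-rate boost next to bound myosin

def ctx_on_rate(i):
    # Myosin binding deforms the filament locally and raises the
    # cortexillin on-rate at neighboring sites (the proposed mechanism).
    near_myosin = myosin[i - 1] or myosin[(i + 1) % N]
    return K_ON_CTX * (COOP if near_myosin else 1.0)

t, t_end = 0.0, 50.0
while t < t_end:
    # Build the propensity list for all possible binding/unbinding events
    events = []
    for i in range(N):
        events.append((K_OFF_MYO if myosin[i] else K_ON_MYO, "myo", i))
        events.append((K_OFF_CTX if cortexillin[i] else ctx_on_rate(i), "ctx", i))
    total = sum(rate for rate, _, _ in events)
    t += random.expovariate(total)                # Gillespie time step
    pick = random.uniform(0.0, total)
    for rate, kind, i in events:
        pick -= rate
        if pick <= 0.0:
            if kind == "myo":
                myosin[i] = not myosin[i]
            else:
                cortexillin[i] = not cortexillin[i]
            break

print(sum(myosin), sum(cortexillin))              # bound counts at t_end
```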


Fig. 3 Kinetics of myosin accumulation adapted from ref. [4]

Fig. 2 Myosin accumulation in different mutants under different applied pressures, adapted from ref. [4]

The shear modulus and stretch modulus of the actin network are known to depend on the concentrations of actin, myosin, and ACLPs. The accumulation of these proteins enhances the local resistance of the actin cortex to external forces and reduces the local strains. As a result, the cell tends to achieve a less deformed shape, i.e., the cell length in the pipette shrinks. Meanwhile, the force felt by each myosin decreases as the myosin concentration increases. Therefore, on one hand, myosin and cortexillin cooperatively accumulate in response to the applied force; on the other hand, the driving force for the protein accumulation continues to diminish due to the shrinking of the cell length in the pipette and the decreasing force applied on each myosin. Using a coarse-grained molecular dynamics simulation scheme initially developed by Discher [7] and later improved by Li [8], we demonstrate that the local enhancement of cortical stiffness associated with protein accumulation drives the cell to crawl away from the pipette. Based on geometry analysis and certain assumed relationships between the moduli and the protein concentrations, we argue that the rate of cell length change equals the negative of the myosin accumulation rate normalized by the instantaneous myosin concentration.

Fig. 4 Cooperative accumulation of myosin and cortexillin in a kinetic Monte Carlo simulation. The strain energy is caused by myosin binding to actin filaments


This prediction is confirmed by plotting the slopes of the kinetic data from experiments, as shown in Fig. 5.

Fig. 5 The derivatives of myosin accumulation (blue circle) and the shrinking of cell length in pipette (black square)

III. CONCLUSIONS


We discovered a mechanosensory system in which the cooperative interactions between myosin and the ACLP are suggested to be mediated by the actin filaments. We performed simulations at the protein level and demonstrated that the actin-filament-mediated interaction indeed reproduces certain key features of the in vivo observations. We also successfully explained the diminishing of deformation during protein accumulation.


ACKNOWLEDGMENT

We acknowledge the support of the National Institutes of Health (Grant #GM066817) and the American Cancer Society (Grant #RSG CCG-114122).


REFERENCES

1. Wang N, Tytell JD and Ingber DE (2009) Mechanotransduction at a Distance: Mechanically Coupling the Extracellular Matrix with the Nucleus. Nature Rev Mol Cell Biol 10: 75-82.
2. Chien S (2007) Mechanotransduction and Endothelial Cell Homeostasis: the Wisdom of the Cell. Am J Physiol Heart Circ Physiol 292: H1209-H1224.
3. Effler JC, Kee S, Berk JM, Tran MN, Iglesias PA, and Robinson DN (2006) Mitosis-Specific Mechanosensing and Contractile Protein Redistribution Control Cell Shape. Curr Biol 16: 1962-1967.
4. Ren Y, Effler JC, Norstrom M, Luo T, Firtel RA, Iglesias PA, Rock RS, and Robinson DN (2009) Mechanosensing through Cooperative Interactions between Myosin II and the Actin Crosslinker Cortexillin I. Curr Biol 19: 1421-1428.
5. Greene LE, and Eisenberg E (1980) Single-myosin crossbridge interactions with actin filaments regulated by troponin-tropomyosin. Proc Natl Acad Sci USA 77: 2616-2620.
6. Trybus KM and Taylor EW (1980) Kinetics study of the cooperative binding of subfragment 1 to regulated actin. Proc Natl Acad Sci USA 77: 7209-7213.
7. Discher DE, Boal DH, and Boey SK (1998) Simulations of the Erythrocyte Cytoskeleton at Large Deformation. II. Micropipette Aspiration. Biophys J 75: 1584-1597.
8. Li J, Dao M, Lim CT, and Suresh S (2005) Spectrin-Level Modeling of the Cytoskeleton and Optical Tweezers Stretching of the Erythrocyte. Biophys J 88: 3707-3719.

The address of the corresponding author:
Author: Tianzhi Luo
Institute: Johns Hopkins School of Medicine
Street: 725 N. Wolfe Street, Physiology 100
City: Baltimore
Country: USA
Email: [emailprotected]

Constitutive Law for Miniaturized Quantitative Microdialysis
C.-f. Chen
Department of Mechanical Engineering, University of Alaska Fairbanks, Fairbanks, AK 99775-5905

Abstract— Miniaturized microdialysis, a membrane-sampling technique, is needed for monitoring "tough" molecular substances such as neurotransmitters, which exhibit limited diffusivity and fast clearance in synaptic space. This paper uses non-dimensional analysis and combinatorial simulations to predict the sampling performance of miniaturized microdialysis prior to rigorously prototyping such small devices. As current microdialysis has a sampling resolution too coarse to meet these needs, one aim of this paper is to understand how miniaturized microdialysis would improve the sampling performance, and to what degree. Our numerical simulations and curve-fitting extrapolations suggest that improved temporal resolution (at least ten times better) is achievable while retaining the relative recovery, a key factor for quantitative microdialysis, at an acceptable level. Toward the limit of theoretical downscaling in microdialysis, the results also suggest the need for new operating principles for miniaturized microdialysis.

Keywords— microdialysis, sampling, miniaturization.

I. INTRODUCTION

Microdialysis is an invasive membrane-sampling technique in which a probe is inserted into tissue in vivo, such that one side of a semi-permeable membrane is in contact with extracellular fluid and the other side is flushed with a dialysis fluid (perfusate) that takes up substances (analytes) from the extracellular fluid through the membrane. When coupled with analytical separation techniques, microdialysis enables online monitoring of targeted bioactive analytes. Since its first presentation in 1966 [4], the ability to continuously sample the extracellular compartment has opened up a wide range of applications of microdialysis in biological sample cleanup [1], observation of metabolic activity in human tissues [2], and monitoring of neurotransmitters in the brain [3]. Microdialysis also allows for delivery of compounds into targeted extracellular sites [5].

The sampling performance of microdialysis is usually quantified by the relative recovery, which is the ratio of the steady-state analyte concentration in the perfusate to the true value in the extracellular fluid. Relative recovery is determined by the probe size and the perfusate flow rate. The former is related to the temporal resolution of microdialysis – the larger the probe, the longer it takes for the analyte concentration to reach its steady-state value. The relative recovery

increases as the perfusate flow rate decreases because analytes are continuously flushed out for further analysis. The continuous sampling in microdialysis indeed creates an environment in which the analyte can never saturate the probe chamber. Interpretation of microdialysis results is typically indirect, based on proportional changes in analyte concentration, since the flow rate affects the relative recovery.

Current microdialysis, typically with a temporal and spatial resolution of about 600 seconds and 0.1 mm³, respectively [6], is somewhat too coarse to sample neurotransmitters such as glutamate. The application to online and real-time monitoring of neurotransmitters, if made successful, would greatly enhance our understanding of their metabolic implications for behavioral stimuli, drug abuse treatment, and pharmaceutical agent development. The fast clearance and short diffusion distances of neurotransmitters impose a challenge for existing microdialysis [7]. Problems associated with the microdialysis devices in use include (relatively) large dead volumes, rough spatial resolution, and traumatic tissue damage associated with probe implantation. Large cross-sectional areas cause significant tissue damage that can hamper interpretation of results [8]. Poor spatial resolution (the probe is large relative to the area sampled) results in a reduced ability to sample a desired tissue region. Prolonged temporal resolution is a particular concern for glutamate detection because of the presumed rapid clearance and short diffusion distances associated with glutamatergic synapses [9]. Recent work with small carbon fibers [10] suggests miniaturization of microdialysis for better sampling resolution. Advancements in microfabrication enable the miniaturization of microdialysis, assuming that the operating principle still holds at the small scale.

This paper predicts the performance of miniaturized microdialysis by calculating the relative recovery and the temporal resolution (i.e., the time to reach the steady state in sampling). The predictive modeling is based on non-dimensional analysis and combinatorial simulations: the former characterizes the sampling process, while the latter uses various combinations of model parameters for quantification. The results are curve-fitted for predicting the performance of miniaturized microdialysis and extrapolating the limit of miniaturization. Finally, we conclude our work by discussing an important implication of the scaling law in microdialysis.


II. IMPLICIT MODEL FOR QUANTITATIVE MICRODIALYSIS

Microdialysis essentially creates a concentration gradient for sampling by diffusion. As implanted in tissue, the microdialysis probe is continuously supplied with a clean perfusate so as to build a concentration gradient inclining from within the probe chamber (through the porous membrane) to the extracellular space, allowing molecular particles to diffuse through the membrane into the probe chamber. As particles enter the probe chamber, they are flushed by the perfusate flow to the outlet for further chemical analysis. The chamber thus can never be saturated, allowing new particles to come in. In in-vivo applications, microdialysis sampling is usually operated at the steady state, which is an equilibrium between convection (attributable to the flushing capacity of the perfusate flow) and diffusion (of the analyte).

A two-dimensional model (Fig. 1) is used to quantify microdialysis. This model, in consideration of microfabricated prototypes, is described in rectangular coordinates and differs from the conventional models, which were all described in cylindrical coordinates [11, 12]. The model in Fig. 1 shows a portion of the microdialysis probe in the proximity of the porous membrane, through which analytes diffuse from the extracellular fluid, through the membrane, into the probe chamber. The perfusate fluid flows through the chamber from left to right. (Microdialysis usually uses a syringe pump to drive the flow.) Owing to the small channel size, it is appropriate to model the perfusate flow as a Poiseuille flow. A no-slip boundary condition along the interior wall of the channel is imposed. Among the many parameters influencing the relative recovery of microdialysis [11], eight parameters, as highlighted in Fig. 1 and listed in Table 1, pertain to the microdialysis probe. The performance of microdialysis is usually quantified by the relative recovery rr, a dimensionless parameter defined as the ratio between c and c∞. It is an implicit function of the other six parameters:

$$rr = \frac{c}{c_\infty} = f\left(V_{avg}, \mu, \rho, D, H, A\right) \quad (1)$$

Note that the diffusion coefficients of the analyte in the porous membrane and the extracellular fluid have been excluded from Eq. (1), since this paper is aimed at the scaling effect of probe dimensions on microdialysis performance. We simply applied constant diffusion coefficients DMEM = 108 μm²/s (membrane) and DECS = 367 μm²/s (extracellular fluid) in the simulations. The use of the constants DMEM and DECS in our model is appropriate only for describing an instrumental tune-up of microdialysis devices, such as in-vitro microdialysis, as in this work. We also assume that the membrane has an out-of-plane dimension equal to that of the probe. The choice of the out-of-plane dimension will not affect the steady-state distribution of analyte in the chamber because a face diffusion problem in three dimensions is equivalent to a line diffusion problem in two dimensions [13].

Fig. 1 Schematic of microdialysis. The performance is governed by the variables shown

Table 1 Microdialysis parameters and their units

Parameter                              Symbol    Dimension
average speed of perfusate flow        Vavg      Lt-1
dynamic viscosity of perfusate         μ         ML-1t-1
density of perfusate                   ρ         ML-3
coefficient of diffusion of analyte    D         L2t-1
characteristic dimension of channel    H         L
area of semi-permeable membrane        A         L2
concentration of analyte in channel    c         L-3
concentration of analyte in tissue     c∞        L-3

III. NONDIMENSIONAL ANALYSIS

The scaling effect on the relative recovery of miniaturized microdialysis can be illustrated by performing non-dimensional analysis on Eq. (1) [14]:

$$rr = \frac{c}{c_\infty} = f\left(\frac{H V_{avg}}{D}, \frac{A}{H^2}, \frac{H V_{avg}\,\rho}{\mu}\right) \quad (2)$$

The relative recovery is governed by three dimensionless groups: HVavg/D (the Péclet number), A/H² (the membrane-to-channel area ratio), and HVavgρ/μ (the Reynolds number). The above equation sheds light on the physics underlying microdialysis. The steady-state analyte concentration in the probe chamber, as formulated by the relative recovery, is dictated by two competing factors: the diffusivity of the analyte, which increases the concentration, and the drifting speed of the perfusate flow, which decreases it. Smaller probes and larger membrane areas are advantageous for higher relative recovery. A large Reynolds number will depreciate the relative recovery, which can be concluded by holding μ, Vavg, and ρ fixed while varying H.
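For a feel of the numbers, the three groups of Eq. (2) can be evaluated for an illustrative sub-millimeter channel with a water-like perfusate; the values below are assumptions, not taken from the paper:

```python
# Illustrative miniaturized-probe values (hypothetical)
H = 50e-6        # channel dimension, m
V_avg = 1e-3     # average perfusate speed, m/s
D = 5e-10        # analyte diffusivity, m^2/s
A = (200e-6)**2  # membrane area, m^2
rho = 1000.0     # perfusate density, kg/m^3 (water-like)
mu = 1e-3        # dynamic viscosity, Pa*s

peclet = H * V_avg / D            # convection vs diffusion
area_ratio = A / H**2             # membrane-to-channel area ratio
reynolds = H * V_avg * rho / mu   # inertial vs viscous forces

print(f"Pe = {peclet:.1f}, A/H^2 = {area_ratio:.1f}, Re = {reynolds:.4f}")
# Pe = 100.0, A/H^2 = 16.0, Re = 0.0500 -> low-Re regime, as in the paper
```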


IV. CONSTITUTIVE LAW FOR QUANTITATIVE MICRODIALYSIS

To develop an explicit version of Eq. (2), quantifying the relation between the relative recovery and the other parameters, we also formulate the following combined diffusion-drift equations to describe the sampling process [14]:

$$\frac{\partial c}{\partial t} = D_{ECS}\, \nabla^2 c \quad \text{in extracellular fluid} \qquad (3)$$

$$\frac{\partial c}{\partial t} = D_{MEM}\, \nabla^2 c \quad \text{in membrane} \qquad (4)$$

$$\frac{\partial c}{\partial t} + \upsilon_x \frac{\partial c}{\partial x} = D_{CHM}\, \nabla^2 c \quad \text{in probe chamber} \qquad (5)$$

These equations describe the transport of the analyte, in the continuum sense, as it diffuses from within the extracellular fluid, through the semi-permeable membrane, and into the probe chamber in which the perfusate flows. The transport problem was implemented in Matlab by the finite difference method. One typical steady-state distribution of the analyte concentration is shown in Fig. 2, in which the horizontal dimension spans the membrane length. A constant line source in the extracellular fluid has been designated at the bottom of the problem domain. Such an arrangement is suitable for modeling microdialysis in vitro, a scenario of placing a microdialysis probe in a large, well-stirred solution reservoir for instrumental performance quantification. A reflective boundary condition was imposed at the top of the domain, where the chamber wall is. No particle is allowed to accumulate at the left- and right-hand sides of the membrane. The perfusate flow (in the chamber) is modeled as a Poiseuille flow. Since the dimensions considered in this study are all in the sub-millimeter range, we assumed that there is no pressure drop across the chamber in the direction of flow. The time when the distribution of concentration reaches its steady state is detected by monitoring the concentration profile of the analyte at a few places in the chamber. Once all the monitored concentrations fluctuate within a preset range (10⁻⁶ in all our simulations) for 10 more time steps, the simulation is deemed to have entered the steady state and is terminated. In the remaining illustrations the relative recovery is defined as the maximum value of the concentrations averaged along each vertical line in the chamber. The same procedure is repeated for another seventy-four combinatorial trials of different values for the six parameters of the three dimensionless groups (Eq. (2)). All the cases simulated have a channel cross-sectional area of less than 2000 μm². For each trial we recorded the corresponding relative recovery and the time to reach the steady state. The results are plotted against the Reynolds number (Re) for seven categories of the membrane-to-channel ratio (p2*) in Fig. 3. All the cases simulated are in the low Reynolds number regime and reach steady state in less than 5 seconds. The data points in the Re range of [0.001, 0.1] segregate into an observable pattern and are thus curve-fitted by power-law functions. The fitted curves suggest a Re-dependent design guideline, by which, for a given Re value, the relative recovery may achieve a level defined by the rr* value in a duration defined by the corresponding ss* value. The data points labeled (2), (5), and (6) in Fig. 3 show three cases that follow the design guideline, together with the corresponding distributions of the steady-state analyte concentration. Data points above the fitted rr* curve, such as Cases (3) and (7), represent design scenarios with higher relative recovery. Data point (1) illustrates an undesirable design, which corresponds to low relative recovery. Data point (4), among other sparsely distributed points located in the very low Re range, illustrates the result of a nearly stopped flow, an impractical setup for continuous-flow microdialysis. The problem domains of all seven cases illustrated in Fig. 3 are proportionally scaled to Case (5), in which the dimensions are in micrometers and the concentration level is indicated by the color bar (atop).
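The authors implemented the scheme in Matlab; the fragment below is a minimal Python re-sketch of the same explicit finite-difference idea for Eqs. (3)-(5) on the Fig. 2 geometry. The grid resolution, the inflow/outflow boundary treatment, and the naive handling of the diffusivity discontinuities at the layer interfaces are our simplifications, and the convergence test monitors the whole field rather than a few stations.

```python
import numpy as np

# Explicit finite-difference sketch of Eqs. (3)-(5) on the Fig. 2 geometry:
# a 100 um long domain with a 10 um extracellular space (bottom), a 6 um
# membrane, and a 20 um chamber (top). Grid spacing 1 um; units um and s.
dx = dy = 1.0
nx, ny = 100, 36
D = np.empty(ny)
D[:10], D[10:16], D[16:] = 367.0, 108.0, 760.0  # D_ECS, D_MEM, D_CHM (um^2/s)

V0 = 240.0                                      # peak chamber velocity, um/s
vx = np.zeros(ny)
yc = (np.arange(16, ny) - 16 + 0.5) / 20.0      # normalized chamber height
vx[16:] = 4.0 * V0 * yc * (1.0 - yc)            # Poiseuille profile, no-slip

dt = 0.2 * dx * dx / D.max()                    # explicit stability margin
c = np.zeros((ny, nx))
c[0, :] = 1.0                                   # line source, c_inf = 1

def step(c):
    cn = c.copy()
    lap = np.zeros_like(c)
    lap[1:-1, 1:-1] = ((c[2:, 1:-1] + c[:-2, 1:-1] - 2 * c[1:-1, 1:-1]) / dy**2
                       + (c[1:-1, 2:] + c[1:-1, :-2] - 2 * c[1:-1, 1:-1]) / dx**2)
    adv = np.zeros_like(c)
    adv[1:-1, 1:] = vx[1:-1, None] * (c[1:-1, 1:] - c[1:-1, :-1]) / dx  # upwind
    cn[1:-1, :] += dt * (D[1:-1, None] * lap[1:-1, :] - adv[1:-1, :])
    cn[0, :] = 1.0            # well-stirred in-vitro reservoir at the bottom
    cn[-1, :] = cn[-2, :]     # reflective chamber wall (zero flux) at the top
    cn[:, -1] = cn[:, -2]     # outflow: zero axial gradient on the right
    cn[16:, 0] = 0.0          # clean perfusate enters the chamber on the left
    cn[:16, 0] = cn[:16, 1]   # no accumulation at the ECS/membrane sides
    return cn

quiet, n = 0, 0
for n in range(100000):       # march in time until the field stops changing
    prev, c = c, step(c)
    quiet = quiet + 1 if np.abs(c - prev).max() < 1e-6 else 0
    if quiet >= 10:           # preset range held for 10 consecutive steps
        break

rr = c[16:, :].mean(axis=0).max()  # max of the vertically averaged chamber c
print(f"steady state near t = {n * dt:.2f} s, relative recovery ~ {rr:.3f}")
```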


Fig. 2 A typical distribution of steady-state analyte concentration in microdialysis. The domain has an x-dimension of 100 μm representing the membrane length and comprises three horizontal regions: the 10-μm-thick extracellular space (bottom), the 6-μm-thick membrane (middle), and the 20-μm-high chamber (top). The diffusion coefficients for the three media are DECS = 367 μm²/s, DMEM = 108 μm²/s, and DCHM = 760 μm²/s, respectively. An aqueous fluid (ρ = 1 g/cm³, μ = 0.09 cP at 20 °C) is used, which flows rightwards through the chamber with a parabolic velocity profile whose peak velocity (along the middle streamline) is V0 = 240 μm/s. The concentration level is indicated by the color bar atop. A constant line source is imposed at the bottom of the problem domain with a constant concentration level of 1 (i.e., c∞ = 1). The averaged relative recovery at the steady state is 0.462. The steady state is reached 2.1 seconds after the analyte begins to diffuse from the source line at the bottom of the domain

For microdialysis operated at large Péclet numbers (e.g., Case (5) in Fig. 3) the analyte exhibits weaker diffusivity than convectivity (flow flushing), so the analyte has less chance to arrive at the top portion of the chamber.


In contrast, Case (4) is associated with a low Péclet number, at which the analyte quickly permeates the entire chamber before the perfusate flow flushes it out. Some cases associated with a relatively large Re value (e.g., Case (5)) involve a relatively fast perfusate flow (e.g., 1.2 mm/s). The back pressure, an unavoidable issue in any continuous flow and an undesired factor in microdialysis, is greatly amplified in microchannels [14] and would be a bottleneck for miniaturizing the microdialysis technique. Is continuous perfusion still effective and efficient in microchannels? One possible answer is to seek a design that operates microdialysis at very low Reynolds numbers (such as Case (4)).

V. CONCLUSIONS

Microdialysis sampling has been formulated and simulated in this paper, with a focus on the temporal performance of a miniaturized microdialysis probe. The performance is essentially decided by the diffusivity of the analyte and the convectivity of the perfusate flow. The fitted curves in Fig. 3 are believed to constitute an optimal design criterion, by which the best relative recovery can be achieved in the shortest time; this hypothesis needs further justification. At very low Re numbers the relative recovery becomes less predictable in our results, suggesting behavior that cannot be explained by Eq. (2). This raises a question: is a stopped flow applicable in miniaturized microdialysis? The answer to this question, as a means to reduce the back-pressure issue, is a key to the success of miniaturizing microdialysis.

REFERENCES

Fig. 3 Scaled relative recovery (rr*) and scaled equilibrium time (ss*) vs. Reynolds number (Re). p2* represents the membrane-to-channel area ratio via p2* = A/(100w) (μm²/μm²) + H/100 (μm/μm), where w is the channel's third (out-of-plane) dimension. rr* = rr/p2*, and ss* = (ss)·V0/sqrt(w/AH) (s·μm/s/μm), where V0 is defined in the Fig. 2 caption. Seven data points, labeled (1)-(7), are chosen to illustrate the associated steady-state distributions of analyte concentration, inset in the rr*-Re plot. All seven illustrations are scaled to the dimensions defined in (5)

1. Wang P C, DeVoe D L, Lee C S (2001) Integration of polymeric membranes with microfluidic networks for bioanalytical applications. Electrophoresis, 22:3857-3867.
2. Benjamin R K, Hochberg F H, Fox E et al. (2004) Review of microdialysis in brain tumors, from concept to application: first annual Carolyn Frye-Halloran symposium. Neuro-Oncology, 65-74.
3. Bourne J (2003) Intracerebral microdialysis: 30 years as a tool for the neuroscientist. Clin. Exp. Pharmacol. Physiol., 30:16-24.
4. Bito L, Davson H, Levin E et al. (1966) The concentrations of free amino acids and other electrolytes in cerebrospinal fluid, in vivo dialysate of brain and blood plasma of the dog. J. Neurochem., 13:1057-1067.
5. Drew K L, Ungerstedt U (1991) Pergolide presynaptically inhibits calcium-stimulated release of gamma-aminobutyric acid. J. Neurochem., 57:1927.
6. Watson C J, Venton B J, Kennedy R T (2006) In vivo measurements of neurotransmitters by microdialysis sampling. Anal. Chem., 1391-1399.
7. Drew K L, Pehek E A, Rasley B T et al. (2004) Sampling glutamate and GABA with microdialysis: suggestions on how to get the dialysis membrane closer to the synapse. J. Neurosci. Methods, 140:127-131.
8. Bungay P M, Newton-Vinson P, Isele W et al. (2003) Microdialysis of dopamine interpreted with quantitative model incorporating probe implantation trauma. J. Neurochem., 86:932.
9. Cragg S J, Rice M E (2004) Dancing past the DAT at a DA synapse. Trends Neurosci., 27:270-277.
10. Allen C, Peters J L, Sesack S R et al. (2001) Microelectrodes closely approach intact nerve terminals in vivo, while larger devices do not: a study using electrochemistry and electron microscopy. Monitoring Molecules in Neuroscience. Proceedings of the International Conference on In Vivo Methods, 9th, pp 89-90.
11. Bungay P M, Morrison P F, Dedrick R L (1990) Steady state theory for quantitative microdialysis of solutes and water in vivo and in vitro. Life Sci., 46:105-119.
12. Benveniste H, Hüttemeier P C (1990) Microdialysis: theory and application. Prog. Neurobiol., 35:195-215.
13. Crank J (1975) The Mathematics of Diffusion, 2nd edn. Oxford Univ. Press, Oxford.
14. Chen C, Drew K L (2008) Droplet-based microdialysis: concept, theory, and design consideration. J. Chromatogr. A, 1209:29-38.


Non-invasive Estimation of Intracranial Pressure by Means of Retinal Venous Pulsatility

S. Mojtaba Golzan, Stuart L. Graham, and Alberto Avolio

Australian School of Advanced Medicine, Macquarie University, Sydney, Australia

Abstract— Current techniques used to measure intracranial pressure (ICP) are invasive and require surgical procedures to implant pressure catheters in the brain ventricles. The amplitude of central retinal vein pulsations (RVPa) has been shown to be associated with the pressure gradient between intraocular pressure (IOP) and ICP. When IOP approaches ICP, the pressure gradient drops, leading to cessation of RVPa. In this study we aim to investigate this relationship and define a new method to estimate ICP non-invasively. 10 healthy subjects (mean age 35±10) with clear medical history were included in this study. Baseline IOP was measured (Goldman tonometer) and RVP recorded using a Dynamic Vessel Analyser. IOP was decreased actively using 0.5% Iopidine and RVP recorded simultaneously every 15 minutes. Digital signal processing techniques were used to measure mean RVP peak-to-peak amplitude in each cardiac cycle at different IOP levels. Linear regression equations were used to extract a relation between IOP and RVPa and to estimate the pressure at which RVPa cease (i.e. RVPa = 0); at this point ICP equals IOP. IOP and ICP pressure waveforms were simulated in order to estimate ICP continuously. Results show a linear relationship between RVPa and IOP such that RVP decreases with IOP reduction. Estimated ICP ranged between 2-13.7 mmHg, all falling in the normal physiological range (i.e. 0-15 mmHg). Analysis of retinal venous pulsation in accordance with IOP may introduce a novel approach for estimating ICP non-invasively.

Keywords— Intracranial Pressure, Retinal Venous Pulsations, Non-invasive measurement.

I. INTRODUCTION

The measurement of the absolute value of intracranial pressure (ICP) is important in diagnosing and treating various pathophysiological conditions caused by head trauma, hemorrhage, tumors and inflammatory diseases. Conventional invasive ICP measurement techniques require surgical passage through the skull bone into the brain ventricles, parenchyma or the region between the skull and dura mater to implant a measuring transducer. Such invasive techniques, however, are undesirable, as damage to the sensitive brain tissues may result; moreover, the invasive nature of the procedures induces a risk of infection. The cerebrospinal fluid (CSF) produced at the choroid plexus of the brain ventricles circulates around the cranium through the subarachnoid space. The CSF compartment surrounds the optic nerve and extends to the posterior aspect of the globe, right up to the lamina cribrosa at the optic nerve head. The central retinal vein (CRV) is contained within the optic nerve at this point and is therefore subject to the pressure interaction between the CSF and the IOP. Baurmann [1] originally modeled the loss of pulsations of the CRV with intracranial hypertension. This finding was later supported in a clinical study by Kahn [2]. Levine [3] proposed the constant inflow variable outflow (CIVO) theory, which described the disappearance of the retinal venous pulsations (RVP) during intracranial hypertension. During an increase in cerebrospinal fluid pressure (CSFp) (also known as the ICP), the CSF pulsations and the mean CSFp rise [4] and approach the intraocular pulse pressure, decreasing the intravascular pressure gradient over the prelaminar and retrolaminar optic nerve and leading to cessation of the RVPa. Levine's hypothesis was supported by Jacks [5], who suggested that the pulsations occur due to a pressure gradient along the central retinal vein as it traverses the lamina cribrosa. Levin [6] found that these pulsations were present in 87.6% of 146 unselected subjects 20-90 years of age and were absent in 100% of 33 patients with increased intracranial pressure. He concluded that the presence of spontaneous venous pulsations was a reliable indicator of an intracranial pressure below 13-14 mmHg. Various systems and methods for the non-invasive measurement of ICP have been suggested; among these, several attempts have been made to use the ocular circulation to approach the ICP [7-10]. Existing techniques for the non-invasive estimation of ICP, including ophthalmoscopic examination for evidence of papilledema in adults or palpation of the fontanelles and skull sutures in infants, are highly qualitative and do not necessarily correlate directly with ICP measurements. The present study examined the relationship between IOP and RVPa and demonstrates a relation between these two parameters in order to estimate ICP non-invasively.


II. METHODS

Ten healthy subjects (35±10 yrs) with no history of eye disease and a normal fundus on ophthalmoscopy, with no vascular changes or signs of raised ICP, were included. Baseline IOP was measured using a tonometer (Goldman). The RVPa was then recorded non-invasively for 100 seconds (inferotemporal vein, 1 disc diameter from the optic disc) using the Dynamic Retinal Vessel Analyser (Imedos, Jena, Germany). IOP was lowered using apraclonidine 0.5% (Alcon, Fort Worth, TX) and was measured every 15 minutes, each time followed by a further 100 seconds of RVPa recording from the same site (i.e. inferior temporal vein). Heart rate (HR) was also recorded throughout. The mean RVPa was subtracted from the recorded RVPa, and the signal was passed through a low-pass filter with a cut-off frequency of 30 Hz (Fig. 1a). A moving average algorithm was applied in order to remove baseline wandering (Fig. 1b). The recordings were then rectified (Fig. 1c) and peaks were detected. A threshold of mean peak amplitude ± 0.5 peak amplitude was used to eliminate undesired artefact peaks collected at the time of recording (Fig. 1d). Selected peaks were plotted against the intraocular pressure. According to the CIVO hypothesis [3] and the other studies [5] described previously, the RVPa amplitude is associated with the pressure difference between the IOP and ICP:

$$RVPa = K\,(IOP - ICP) \qquad (1)$$

where K is a constant. According to equation (1), when venous pulsations cease to be present (i.e. RVPa = 0), then ICP = IOP. Linear regression equations were used to relate changes in the RVPa peaks and IOP; Figure 2 is an example of this relation. Based on the linear regression obtained between RVPa peaks and IOP, it is possible to estimate ICP. K was measured and averaged using an iterative method, in which 9 subjects were used to define K and 1 subject for testing. Rearranging equation (1):

$$ICP = IOP - RVPa/K \qquad (2)$$

Fig. 1 (a-d): a- recorded RVPa passed through a low-pass filter, b- baseline wandering removed from the recorded RVPa using a moving average algorithm, c- signal rectified, d- peaks detected with threshold

According to equation (2), the mean ICP can be estimated using the baseline mean IOP, the mean RVPa and the measured K. The equation can also be applied to each individual cardiac cycle of the RVPa to estimate the mean ICP in that cardiac cycle. ICP waveforms are then simulated using the following equation [11], and the estimated mean ICP is added to the simulated waveform:

(3)

where A and B define the ICP pulse pressure and were chosen as A = 1 and B = 0.5. Figure 3 is a diagram of the methodology discussed above.
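The four conditioning stages of Fig. 1 map directly onto standard DSP primitives. The following Python sketch reproduces the chain on a synthetic trace; the sampling rate, filter order, moving-average window and test signal are all assumptions, since the paper does not specify them.

```python
import numpy as np
from scipy.signal import butter, filtfilt, find_peaks

# Sketch of the four-stage RVPa conditioning chain of Fig. 1 (a-d).
# fs and the synthetic test signal are assumptions.
fs = 250.0                                   # Hz (assumed)
t = np.arange(0, 100, 1.0 / fs)              # 100 s recording
rvp = 10 * np.sin(2 * np.pi * 1.2 * t) + 0.5 * t + np.random.randn(t.size)

x = rvp - rvp.mean()                         # remove the mean RVPa
b, a = butter(4, 30.0 / (fs / 2.0))          # (a) low-pass, 30 Hz cut-off
x = filtfilt(b, a, x)

win = int(2 * fs)                            # (b) moving average detrending
baseline = np.convolve(x, np.ones(win) / win, mode="same")
x = x - baseline

x = np.abs(x)                                # (c) rectify

peaks, props = find_peaks(x, height=0)       # (d) candidate peaks
h = props["peak_heights"]
keep = np.abs(h - h.mean()) <= 0.5 * h.mean()  # threshold: mean +/- 0.5*mean
print(f"{keep.sum()} accepted peaks out of {peaks.size}")
```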


Table 1 Baseline IOP, RVPa and estimated ICP for our subjects

Subject No | Baseline IOP (mmHg) | Mean RVPa (µm) | Estimated ICP (mmHg)
1 | 16 | 9.2 | 9
2 | 16 | 5.3 | 9.3
3 | 12 | 12.8 | 6.4
4 | 19 | 15.6 | 13.7
5 | 12 | 7.5 | 8.9
6 | 15 | 12.7 | 9.2
7 | 15 | 7.8 | 2.7
8 | 12 | 5.2 | 2.2
9 | 14 | 19.1 | 4.9
10 | 18 | 11.2 | 12.4

Fig. 2 Linear regression line used to relate changes in RVPa amplitudes to IOP changes (RVP peaks (µm) vs. IOP (mmHg))

Fig. 3 Overall schematic used to simulate and estimate ICP

Fig. 4 Continuous estimated ICP (mmHg) vs. time (seconds). a- ICP estimated based on RVPa peaks and IOP; the dashed line is the mean ICP estimated from the regression equation (as shown in Fig. 2), and the error between the continuous and mean estimated ICP is 3%. b- Second tested subject; the error is 5%

III. RESULTS

Table 1 shows the baseline mean IOP, RVPa and estimated ICP recorded from all subjects. The average K was 2.18 ±0.8. The RVPa peaks decreased consistently as IOP decreased. The mean ICP in each cardiac cycle was estimated using equation 3. These values were then added to the simulated ICP waveform. Two of the subjects were used to test the algorithm. Figure 4 shows continuous waveforms of ICP for each cardiac cycle (solid line) and the overall mean ICP measured from the linear regression equations (dashed line). The error between the solid line and the dashed line is 3% and 5% in the two panels, respectively. The minimum estimated ICP was 2.2 mmHg and the maximum was 13.7 mmHg. We observed that ICP correlated with the height of the subjects (Height (cm) = -1.55·ICP (mmHg) + 186.9, R² = 0.4). The mean RVPa for all 10 subjects fell from 10.75 µm at baseline to 3.26 µm at the lowest IOP. Table 2 shows the changes in mean IOP, RVPa and heart rate at baseline and at the lowest IOP for all of our subjects (i.e. after 45 minutes).
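The per-subject estimation step reduces to a single linear fit. The sketch below illustrates it on made-up (IOP, RVPa) pairs: the slope estimates K, and the IOP at which the fitted RVPa would vanish gives the mean ICP via Eq. (2).

```python
import numpy as np

# Regress RVPa peaks on IOP as IOP is lowered, then read off the IOP at
# which RVPa = 0 (hence ICP = IOP, Eq. (1)). Data points are illustrative.
iop  = np.array([16.0, 14.0, 12.5, 11.0])   # mmHg, measured every 15 minutes
rvpa = np.array([9.2, 7.4, 6.0, 4.3])       # um, mean peak-to-peak amplitude

slope, intercept = np.polyfit(iop, rvpa, 1) # RVPa = K*IOP - K*ICP
K = slope
icp_mean = -intercept / K                   # Eq. (2) evaluated at RVPa = 0
print(f"K = {K:.2f} um/mmHg, estimated mean ICP = {icp_mean:.1f} mmHg")
```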

Table 2 Changes of IOP, Heart rate and mean RVPa

 | t = 0 minutes (Baseline) | t = 45 minutes | P value
Mean IOP (mmHg) | 15.5 ±2.9 | 10.8 ±2.9 | p0.1
Mean RVPa (µm) | 10.75 ±4.9 | 3.26 ±1.28 |

… with an average statistical power > 91%. Therefore, the hypothesis that the glutathione patch worn 12 hours daily for 4 weeks significantly improves cellular physiologic functional status in different organs was accepted as true.


The address of the corresponding author:
Author: Homer Nazeran PhD, CPEng (Biomed.)
Institute: Department of Electrical and Computer Engineering
Street: 500 West University Ave, University of Texas at El Paso
City: El Paso, Texas 79968
Country: United States of America
Email: [emailprotected]


Nanoscale Carnosine Patches Improve Organ Function

Homer Nazeran1 and Sherry Blake-Greenberg2

1 BioMedEng Consulting, El Paso, Texas, USA
2 Health Integration Therapy, Palos Verdes, California, USA

Abstract— Carnosine (β-alanyl-L-histidine) is a naturally occurring dipeptide present in brain, cardiac muscle, stomach, kidney, olfactory bulbs and, in large quantities, in skeletal muscle. As free-radical-induced damage to cells is an important factor in aging and senile diseases, carnosine has the potential to prevent and treat diseases such as atherosclerosis, diabetes, Alzheimer's and senile cataract. Recent clinical research shows that carnosine has the ability to rejuvenate senescent cells and delay eyesight impairment and cataract, which are manifestations of the aging process. These results provide valuable data in favor of considering carnosine a natural anti-aging substance. Bioelectrical impedance data indicative of cellular physiologic organ function (status), using an Electro Interstitial Scanning (EIS) system, were acquired from twenty volunteers: 7 males and 13 females, 19-83 (mean 43) years of age, 118-185 (mean 150) lbs in weight, and 5'-6' (mean 5'5") in height. Cellular physiologic function was evaluated in 10 organs (pancreas, liver, left/right kidneys, intestines, left/right adrenal glands, hypothalamus, pituitary and thyroid glands) while wearing a nanoscale carnosine patch for 2 weeks. EIS testing was repeated each week. Cellular physiologic function baseline data were acquired from all subjects at the beginning of the study period, before application of the nanoscale carnosine patch. Subjects were instructed to keep well hydrated during the study period. All subjects served as their own control. The hypothesis to be tested was: the carnosine patch worn 12 hours/day on alternate days for two weeks significantly improves cellular physiologic functional status in different organs. Statistical analyses revealed that the carnosine patch worn 12 hours daily on alternate days (Tuesdays, Thursdays, and Saturdays) over a period of 2 weeks produced a very significant (p < 0.01) improvement in the physiologic functional status of the pancreas, liver, right kidney, left/right adrenals, hypothalamus, pituitary and thyroid glands, with an average statistical power > 95%.

Keywords— Nanotechnology, Carnosine patch, Aging, Cellular physiologic function measurements, Electro interstitial scan (EIS) system, LifeWave.

I. INTRODUCTION

Carnosine, termed an “amazing anti-aging nutrient”, is a dipeptide molecule comprised of 2 amino acids: beta-alanine and L-histidine. It was first isolated from meat extracts by the Russian scholars Gulewitsch and Amiradzibi in 1900 [1]. It is a naturally occurring (endogenously synthesized) molecule present in brain, cardiac muscle, stomach, kidney, olfactory bulbs and, in large quantities, in skeletal muscle [2]. Many studies on the biological and biochemical effects of carnosine have suggested that it possesses antioxidant and free radical scavenging properties [3]. Free radicals are dangerous byproducts of the normal metabolic processes converting food to energy. They are unstable oxygen-containing molecules, hungry for electrons, with an insatiable appetite for cell destruction. Carnosine, like its “dancing partner” glutathione, is an antioxidant that serves as an endogenous defense against the harmful effects of free radicals by quenching the destructive free electrons in these molecules. The balance between free radicals and antioxidants can easily be disrupted, for instance when the body is under stress, fighting an infection or inflammation, or healing from an injury, in which case more free radicals are generated. Free radicals are also created when the body is exposed to cigarette smoke, alcohol, ultraviolet light, heavy metals, air pollution, pesticides, food additives, and other environmental toxins. Free radicals are the underlying cause of a variety of illnesses in the body [4]. They are also one of the most important possible causes of aging and senile diseases [5]. The literature shows that the emergence and development of aging are closely related to free-radical-induced damage to cells. Free radical damage leads to instability and malfunctioning of the cells, which consequently cause senile diseases such as atherosclerosis, diabetes, Alzheimer's disease, and senile cataract. Research on the biological and biochemical effects of antioxidants and free radical scavenging molecules such as glutathione and carnosine has shown that these compounds have the ability to protect cells from the harmful effects of free radicals; they could therefore exert a normalizing function on cell metabolism and serve as endogenous anti-aging compounds. Extensive preliminary research by Russian scholars has shown that carnosine has a variety of beneficial effects, including an increase in muscle strength and endurance, protection against radiation damage, enhancement of immunity and reduction of inflammation, protection against the formation of ulcers and their treatment, treatment of burns, promotion of wound healing after surgery, improvement of appearance, etc.


In a review, Quinn et al. [6] suggest that carnosine and its related dipeptides could be considered the water-soluble counterpart to lipid-soluble antioxidants such as vitamin E, serving to protect cells from oxidative damage. They refer to numerous studies that have demonstrated strong and specific antioxidant properties of these compounds both at the tissue and organelle level. They describe that carnosine and its related dipeptides play a number of roles, such as acting as neurotransmitters, modulating enzymic activities and chelating heavy metals, and that these compounds have antihypertensive, immunomodulating, wound-healing and antineoplastic effects. Hipkiss et al. [7] present evidence suggesting that carnosine, in addition to its antioxidant and oxygen free-radical scavenging activities, also reacts with deleterious aldehydes to protect susceptible macromolecules. They propose that the role of carnosine and its related dipeptides should be explored in pathologies that involve deleterious aldehydes, for example secondary diabetic complications, inflammatory phenomena, alcoholic liver disease, and possibly Alzheimer's disease. For a more detailed study of the beneficial effects of carnosine, please refer to the references listed in reference [8]. The current methods of oral supplementation with carnosine take 1-4 months to show any significant effects. Marios Kyriazis, MD, performed a preliminary experiment using L-carnosine supplements (50 mg daily) on 20 healthy human volunteers, aged 40-75 years, for a period of 1-4 months. He reports: “No side effects were reported. Five users noticed significant improvements in their facial appearance (firmer facial muscles), muscular stamina and general well-being. Five others reported possible benefits, for example better sleep patterns, improved clarity of thought and increased libido. The rest did not report any noticeable effects. This is not surprising because supplementation with carnosine is not expected to show any significant noticeable benefits in a short time, but it should be used as an insurance against deleterious effects of the aging process. If any benefits are noted, these should be considered as an added extra bonus. It is worthwhile persevering with the supplementation long term, even if you do not experience any obvious benefits, as you will still be well protected against aging. Carnosine can be used together with vitamin E and/or Co-enzyme Q10 for full antioxidant protection, but even if it is used on its own it should still confer significant protection both against free radicals and against glycosylation.” [9] Our study is the first pilot investigation of its kind to explore the effect of the carnosine patch on organ physiologic function. Bioelectrical impedance data indicative of cellular physiologic function were acquired using an EIS system.


Cellular physiologic function was evaluated in 10 organs (pancreas, liver, left and right kidneys, intestines, left and right adrenal glands, hypothalamus, pituitary and thyroid glands) while subjects wore the carnosine patch for a period of 2 weeks, 12 hours/day on alternate days of the week (Tuesdays, Thursdays and Saturdays). Physiologic function testing was repeated each week; each visit was approximately 1 hour in duration. Physiologic function baseline data were acquired from all subjects at the beginning of the study period, before the carnosine patch was worn. Subjects were instructed to keep well hydrated (as 16% of body fluids is extracellular fluid) during the study period. All subjects served as their own control. The overall data in this study demonstrated that the carnosine patch worn 12 hours daily on alternate days over a period of 2 weeks produced a very significant (p < 0.01) improvement in the physiologic functional status of the pancreas, liver, right kidney, left and right adrenals, hypothalamus, pituitary and thyroid glands, with an average statistical power of at least 95%. Therefore, the hypothesis was accepted as true.

II. MATERIAL AND METHODS

Subjects: Twenty volunteer subjects, 7 males and 13 females, 19-83 (mean 43) years of age, 118-185 (mean 150) lbs in weight, and 5'-6' (mean 5'5") in height, participated in this study. They wore the carnosine patch for 12 hours daily, on alternate days of the week (Tuesdays, Thursdays and Saturdays), for 2 weeks. After informed consent was given, cellular physiologic function baseline data were acquired from all subjects at the beginning of the study period, before the carnosine patch was worn, and then weekly afterwards. Subjects were instructed to keep well hydrated during the study period. All subjects served as their own control. The subjects were instructed to place the carnosine patch 2 inches inferior to the navel (below the belly button) or on the CV6 acupuncture point, according to the manufacturer's instructions.

Carnosine Patch: For this research, the nanoscale carnosine patch (LifeWave, La Jolla, California, USA) was used. The carnosine patch is described as a new method for increasing carnosine levels by stimulating acupuncture points on the body with a combination of pressure and infrared energy. “The carnosine patch is a non-transdermal patch that does not put any chemicals or drugs into the body. The carnosine patch contains natural nontoxic crystals that absorb body heat to generate infrared signals that cause the body to produce more endogenous carnosine.” The patch remains active for 12 hours. The carnosine patch is termed the “dancing partner” of the glutathione patch and seems to enhance and complement its physiological effects.


Electro Interstitial Scan (EIS) System and Measurements: An EIS system (LD Technology, Coral Gables, Florida, USA), a programmable electro-medical device, was deployed to acquire bioelectrical impedance measurements indicative of cellular physiologic functional status in 10 organs. The EIS system is a French device, classified as a Biofeedback Class 2 device in the United States (FDA product code: HCC). Recently the FDA has approved a number of alternating current (ac) bioelectric impedance (BIM) devices for use in cardiology and oncology [10-15]. Before EIS measurements were made on subjects, four operational tests were carried out automatically by the device: power supply test, channel test, volume and conductivity measurement and correspondence tests, as well as cable and precision control tests. Electrodes and electrode application sites were prepared following the manufacturer's instructions. Under software control the hardware delivers a sequence of three sets of 1.28 V pulses: 22 ac pulses, 1 second each, at 50 kHz (at 0.6 mA, energy/pulse = 0.77 mJ); 22 dc pulses, 1 second each (at 0.6 mA, energy/pulse = 0.77 mJ); and another set of 22 dc pulses, 3 seconds in duration each (at 0.6 mA, energy/pulse = 0.77 mJ), to 6 electrodes. These electrodes (2 disposable Ag/AgCl electrodes applied to the forehead, 2 reusable polished stainless steel hand electrodes, and 2 reusable polished stainless steel foot electrodes) form 22 different electrode-pair (sensing) configurations and measure the interstitial fluid conductivity (by applying Maxwell's equations), from which on-screen 3-D models of the human body organs are generated. The measurements are scaled from -100 to +100. As dc current only passes through the interstitial fluid (16% of the body's total water), the device can measure the composition of the interstitial fluid as well as other biochemical parameters and detect ionic abnormalities.

Inclusion Criteria: Inclusion criteria for participation in this study were healthy and functional individuals who were willing to wear the carnosine patch and participate in the study for a period of two weeks. Participants also agreed not to start any other new therapy or method of healing and/or make any major changes in their daily life that could alter the efficacy of the study. Subjects must not have worn the carnosine patch prior to the study. Subjects were recruited from the local area of Palos Verdes and may or may not have been previous patients of Health Integration Therapy.
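As a quick arithmetic check of the per-pulse energy quoted for the 1-second EIS pulses above, a 1.28 V pulse driving 0.6 mA for 1 second delivers V·I·t of energy:

```python
# Quick arithmetic check of the quoted per-pulse energy for the EIS sequence:
# a 1.28 V pulse driving 0.6 mA for 1 s delivers E = V * I * t.
V, I, t = 1.28, 0.6e-3, 1.0                # volts, amperes, seconds
print(f"energy per 1-s pulse = {V * I * t * 1e3:.2f} mJ")   # -> 0.77 mJ
```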

Statistical Analysis: The cellular physiological effects in different organs after 2 weeks of wearing the carnosine patch were compared to the baseline data before wearing the patch using the paired t-test. A p value < 0.05 was accepted as statistically significant. Sample size (n), level of significance (α or p), effect size (the mean value of the EIS reading after wearing the patch minus the baseline mean value) and statistical power are related by the following formula:

$$\text{Statistical Power} = \Phi\left[Z_\alpha + \frac{|\mu - \mu_0|\,\sqrt{n}}{\sigma}\right] \qquad (1)$$

where Zα is the Z score corresponding to the area under the normal distribution curve at the desired level of significance, |μ−μ0| is the effect size, σ is the standard deviation, and n is the sample size.
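Eq. (1) is straightforward to evaluate with a normal CDF. The sketch below does so in Python with illustrative numbers (not the study's data), using the convention that Zα is the (negative) standard normal quantile at α.

```python
from scipy.stats import norm

# Evaluate the power formula of Eq. (1) with scipy's normal CDF (Phi).
# All numbers below are illustrative placeholders, not the study's data.
alpha = 0.05
z_alpha = norm.ppf(alpha)          # Z score at the chosen significance level
effect = 8.0                       # |mu - mu0|: mean EIS change vs. baseline
sigma = 12.0                       # standard deviation of the changes
n = 20                             # sample size

power = norm.cdf(z_alpha + effect * (n ** 0.5) / sigma)
print(f"statistical power = {power:.2f}")   # ~0.91 for these numbers
```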

III. RESULTS

Table 1 shows typical EIS readings (cellular organ function, or physiologic status) for a female subject, while Table 2 shows typical EIS recordings for a male subject. The change in functional physiological status in different organs from Week 1 compared to baseline is designated Δ1, and the change from Week 2 to Week 1 is designated Δ2. Δavg is the average of the changes over the 2-week period, ΔT represents the average total physiologic change after 2 weeks, and ΔT-base indicates the total change at the end of the 2-week period with respect to the baseline measurements. Table 3 shows the overall mean values and standard deviations for the baseline and the total change (ΔT) in physiologic function for each of the organs (n = 20).

Table 1 Typical Electro Interstitial Scan data for a female subject. Age: 30, Weight: 125 lb, Height: 5 ft, 5 inches

Table 2 Typical Electro Interstitial Scan data for a male subject. Age: 66, Weight: 168 lb, Height: 5 ft, 11 inches



Table 3 Summary of mean and standard deviation values for EIS readings in 10 organs, n = 20


IV. CONCLUSIONS

Statistical analyses were carried out on the data acquired from these subjects, comparing the cumulative averages of the net changes in the physiologic functional status of each organ at the end of the 2-week study period with the corresponding baseline data. The results showed a highly significant (p < 0.001) improvement in the physiologic functional status of all organs tested, except for the pancreas and pituitary gland, which showed a very significant improvement (p < 0.01), and the left kidney and intestines, which did not achieve significance. The average statistical power, considering the effect size (% improvement in physiologic function), sample number, and level of significance, was at least 84% in all organs that achieved a highly significant improvement in cellular physiologic function. The average statistical power in the pancreas and pituitary gland, which showed a very significant improvement, was at least 95%. The left kidney and intestines did not achieve significance after 2 weeks of exposure to the carnosine patch. This could be attributed to a placebo effect, or to the fact that these organs need more exposure time to the carnosine patch to significantly improve their physiologic status as a consequence of biochemical changes in their extracellular environment. Considering the fact that supplementation with carnosine or its building blocks may take 1-4 months to show a steady-state effect, this level of impact is still remarkable. In the future, we plan to perform a double-blind placebo-controlled investigation to explore this topic further.

The overall data in this study demonstrated that the carnosine patch worn 12 hours daily on alternate days over a period of 2 weeks produced a very significant (p < 0.01) improvement in the physiologic functional status of the pancreas, liver, right kidney, left and right adrenals, hypothalamus, pituitary and thyroid glands, with an average statistical power of at least 95%. Therefore, the hypothesis that the carnosine patch worn 12 hours/day on alternate days for two weeks significantly improves cellular physiologic functional status in different organs was accepted as true.

REFERENCES

1. Gulewitsch VS, Amiradzibi S (1900) Über das Carnosin, eine neue organische Base des Fleischextraktes. Ber. Dtsch. Chem. Ges. 33:1902-1904.
2. Gariballa SE, Sinclair AJ (2000) Carnosine: physiological properties and therapeutic potential. Age and Aging 29:207-210.
3. Boldyrev AA, Formazyuk VE, Sergienko VI (1994) Biological significance of histidine-containing dipeptides with special reference to carnosine: chemistry, distribution, metabolism and medical applications. Sov. Sci. Rev. D. Physicochem. Biol. 13:1-60.
4. Pressman AH (1997) Glutathione: The Ultimate Antioxidant. St. Martin's Press, New York, NY.
5. Wang AM, Ma C, Xie ZH et al (2000) Use of carnosine as a natural anti-senescence drug for human beings. Biochemistry (Moscow) 65(7):860-871.
6. Quinn PJ, Boldyrev AA, Formazyuk VE (1992) Carnosine: its properties, functions and potential therapeutic applications. Mol Aspects Med 13(5):379-444.
7. Hipkiss AR, Preston JE, Himsworth DT et al (1998) Pluripotent protective effects of carnosine, a naturally occurring dipeptide. Ann N Y Acad Sci 854:37-53.
8. http://www.smart-publications.com/anti-aging/carnosine.php
9. Kyriazis M (2010) Carnosine: The new anti-aging supplement. http://www.smart-drugs.net/ias-carnosine-article.htm
10. Van De Water JM, Miller TW, Vogel RL et al (2003) Impedance cardiography: the next vital sign technology? Chest 123:2028-2033.
11. Critchley LAH (1998) Impedance cardiography. The impact of new technology. Anaesthesia 53:677-684.
12. Cotter G, Schachner A, Sasson L et al (2006) Impedance cardiography revisited. Physiol Meas 27:817-827.
13. http://www.fda.gov/cdrh/pdf/p970033.html
14. Fricke H, Morse S (1926) The electric capacity of tumors of the breast. J Cancer Res 16:310-376.
15. Morimoto T, Kinouchi Y, Iritani T et al (1990) Measurement of the electrical bio-impedance of breast tumors. Eur Surg Res 22:86-92.

The address of the corresponding author:
Author: Dr Homer Nazeran PhD, CPEng (Biomed.)
Institute: Department of Electrical and Computer Engineering
Street: 500 West University Ave, University of Texas at El Paso
City: El Paso, Texas 79968
Country: United States of America
Email: [emailprotected]


Multiple Lumiphore-Bound Nanoparticles for in vivo Quantification of Localized Oxygen Levels

J.L. Van Druff, W. Zhou, E. Asman, and J.B. Leach

University of Maryland, Baltimore County, Department of Chemical and Biochemical Engineering, 1000 Hilltop Circle, Baltimore, MD, USA

Abstract— Our group has previously published work wherein microparticles with bound oxygen-sensitive and oxygen-insensitive lumiphores proved an accurate, precise and reliable tool for the quantification of localized oxygen partial pressure in vitro. Calibration between the luminescence of the oxygen-sensitive lumiphore and the local oxygen partial pressure allows for oxygen quantification, while the luminescence of the oxygen-insensitive lumiphore allows for corrections based on particle concentration. An analogous system may prove to be an equally useful tool for in vivo measurements, if certain design features are altered to address concerns such as tissue optical absorptivity and possible toxicity. Current studies focus on the design of a surface-functionalized nanosphere system as a possible approach. This work focuses on the development of this sensing technology as well as methods to allow for precise and flexible synthesis, characterization of key properties (e.g., oxygen sensing in whole blood) and optimization for in vivo conditions.

Keywords— Imaging, Sensors, Nanotechnology, Oxygen, Cancer, Magnetic Nanoparticles.

I. INTRODUCTION

A. Background

Localized hypoxia is known to correlate with tumor radiation resistance, angiogenesis, and metastasis. Consequently, several non-invasive methods of mapping oxygen concentration in vivo can be found in the literature. Two well-studied methods are oxygen-dependent quenching of lumiphores and electron paramagnetic resonance (EPR). Oxygen content has been successfully mapped with EPR, with limited spatial resolution (~3 mm) [1], [2]. Quenching-based techniques can provide higher resolution; however, the efficacy of such systems is contingent upon the depth to which the excitation and emission wavelengths can penetrate tissue. Ruthenium-based lumiphores (λEx: 448 nm, λEm: 603 nm) have been utilized to acquire in vivo measurements up to a depth of ~1 mm [3]. This depth limitation is due to the fact that the excitation wavelength is within the range of the visible spectrum, where tissue absorbs very strongly.

A class of lumiphores that shows great promise for in vivo oxygen measurement is the palladium tetraaryl tetrabenzoporphyrins (Pd-Ar4TBPs; Fig. 1).

Fig. 1 Carboxy-functionalized Pd-Ar4TBP

The Vinogradov group has published numerous papers wherein Pd-Ar4TBPs were successfully used to quantify oxygen at depths in the centimeter range [4], [5], [6]. Such measurements are possible because Pd-Ar4TBPs both absorb and emit at wavelengths (λEx: 636 nm, λEm: 795 nm) greater than the absorptive maximum for tissue (~540 nm). Two problems related to the use of Pd-Ar4TBPs in vivo are the innate hydrophobicity of the molecule and the singlet oxygen (¹O₂) produced during the quenching process. ¹O₂ is a transient, high-energy toxin. This species is toxic enough that it has been used to selectively kill cells in a process known as photodynamic therapy [7]. Our group has previously published work on a novel multi-lumiphore microparticle system for the quantification of oxygen in vitro [8]. This system consists of a ruthenium dye and Nile Blue concomitantly bound to microparticles via ionic interactions. Ruthenium is quenchable by oxygen whereas Nile Blue is not. The strength of this system lies in the fact that the ruthenium can quantify oxygen via the Demas-Stern-Volmer relation [9], while Nile Blue allows for ratiometric corrections for variations in particle concentration.


$$\frac{I_0}{I} = \left[\frac{f_1}{1 + K_{SV1}\, pO_2} + f_2\right]^{-1} \qquad (1)$$

The Demas-Stern-Volmer equation. Here, I0 is the intensity at pO2 = 0, I is the observed intensity, and KSV is the Stern-Volmer constant. f1 and f2 denote the fractions of the lumiphore population which are accessible and inaccessible to quenching, respectively. It is reasonable to assume that a system analogous to our microparticle system could prove an equally useful tool for in vivo oxygen quantification, provided certain design elements are altered.

B. Design

The design of our new particle-based system was contrived with biological conditions and concerns in mind. To maximize excitation and emission transmission through an organism, the lumiphore utilized to detect oxygen will be the carboxy-functionalized Pd-Ar4TBP (Fig. 1). While commercially unavailable, a synthesis scheme exists in the literature. The oxygen-insensitive dye will be carboxy-functionalized DyLight 747 (Thermo-Pierce) (λEx: 745 nm, λEm: 805 nm). The lumiphores are covalently linked to amine-functionalized magnetic nanoparticles (TurboBeads) via an amide bond. The amide bond was chosen due to its low dissociation under nearly all biological conditions (thus minimizing the likelihood of introducing free lumiphore into the body). The use of magnetic nanoparticles provides two additional benefits for in vivo use. First, the magnetic aspect of the particles could be used to non-invasively manipulate the local particle concentration in vivo or to remove the particles from the body. Second, a hydrophilic, biologically inert polymer, such as polyethylene glycol (PEG), could be concomitantly bound to the particles along with the lumiphores. This would add to the hydrophilicity of the system and could protect biomolecules from ¹O₂ by forming a steric “shield”.
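For context on how Eq. (1) is used in practice, the sketch below inverts the two-site Demas-Stern-Volmer model to recover pO2 from a measured intensity ratio. The calibration constants f1, f2 and K_SV1 are illustrative placeholders that would normally come from fitting calibration data.

```python
import numpy as np

# Invert the two-site Demas-Stern-Volmer relation of Eq. (1) to recover pO2
# from a measured intensity ratio. Constants below are illustrative only.
f1, f2 = 0.85, 0.15          # accessible / inaccessible lumiphore fractions
K_SV1 = 0.02                 # Stern-Volmer constant, 1/mmHg (assumed)

def ratio(pO2):
    """Forward model: I0/I as a function of pO2."""
    return 1.0 / (f1 / (1.0 + K_SV1 * pO2) + f2)

def pO2_from_ratio(r):
    """Closed-form inversion of the forward model."""
    return (f1 / (1.0 / r - f2) - 1.0) / K_SV1

for p in (0.0, 40.0, 160.0):                      # round-trip sanity check
    assert abs(pO2_from_ratio(ratio(p)) - p) < 1e-9
print("I0/I at 160 mmHg =", round(ratio(160.0), 3))
```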

II. METHODS AND RESULTS

A. Ar4TBP Synthesis

A synthesis scheme for the Pd-Ar4TBP in Fig. 1 can be found in the literature [10]. The two primary precursors for this scheme are dimethyl 1,2,3,6-tetrahydrophthalate (“phthalic ester”) and t-butyl isocyanoacetate. Phthalic ester was synthesized from 1,2,3,6-tetrahydrophthalic anhydride in an acid-catalyzed esterification reaction. t-Butyl


isocyanoacetate was synthesized from t-butyl chloroacetate and formamide following the published procedure [11]. The phthalic ester was reacted with t-butyl isocyanoacetate to form a pyrrole ester (Fig. 2)

Fig. 2 Pyrrole ester used in this synthesis

The pyrrole ester was then reacted with benzaldehyde under Adler-Longo conditions to form tetraaryl tetracyclohexanylporphyrin (Ar4TCHP; Fig. 3)

Fig. 3 Ar4TCHP

The Ar4TCHP was then metallated with zinc acetate, and the cyclohexanyl groups were aromatized to benzo groups via oxidation with dichloro dicyano benzoquinone (DDQ). The zinc chelate was then de-metallated with TFA to yield a free-base capable of forming a Pd chelate. Analysis of the Ar4TCHP and Ar4TBP products was complicated by three phenomena. First, both free-bases can form dications when exposed to oxidative conditions (such as DDQ); dications cannot be aromatized, but can be re-metallated. Second, Zn-Ar4TCHP and Zn-Ar4TBP are labile and spontaneously decay to the free-base. Third, the direct product of the Adler-Longo reaction alternates between free-base and dication and contains numerous contaminants. Isolation of the direct product is possible but requires numerous rounds of chromatography and recrystallization, which would lead to a loss of product. These separation issues made the prospect of NMR analysis somewhat unfavorable. Fortunately, Ar4TCHP and Ar4TBP, as well as their corresponding dications and metal chelates, have distinct UV-Vis peaks [12].


To analyze the identity of the product of the reaction in which the Ar4TBP free-base is formed from Zn-Ar4TBP, two rounds of chromatography (eluent: 12:1:1 methylene chloride:THF:acetic acid) were employed. Fractions from the second round were analyzed via UV-Vis spectroscopy. The fraction containing the product was then recrystallized twice with hexane. These UV-Vis data are presented in Fig. 4.

Fig. 5 Carboxy-functionalized Pd-TPP

Fig. 4 a) UV-Vis scan of the first fraction of the second round of chromatography, b) 4th fraction, c) 8th fraction, d) 8th fraction after recrystallization. 507 nm corresponds to the Ar4TBP dication; 450 nm corresponds to the Ar4TCHP dication

From these data, it appears that both Ar4TCHP and Ar4TBP are present, which indicates incomplete aromatization. There are two likely explanations for the incomplete aromatization. First, the labile Zn-Ar4TCHP may have de-metallated during the DDQ reaction, leaving oxidatively inert Ar4TCHP free-bases. Second, the Adler-Longo reaction results in chlorins (porphyrins wherein the pi-bond system is not fully conjugated), and the conversion of chlorin to porphyrin consumed a portion of the DDQ.

B. Surface Reaction Development

In order to produce our final product, a conjugation reaction capable of binding both carboxy Pd-Ar4TBP and DyLight 747 to the amine-functionalized nanoparticles was required. The method we chose was EDC/NHS-mediated coupling [13]. Since this work was performed while the Pd-Ar4TBP was still being synthesized, another species, Pd-tetraphenylporphyrin (Pd-TPP; Fig. 5), was used as a substitute.

It was assumed that, because of the similarities between Pd-TPP and Pd-Ar4TBP, a conjugation reaction that works for Pd-TPP would likely work for Pd-Ar4TBP as well. The reaction initially took place in PBS at pH 7.2. A 10× molar excess of EDC and NHS was used so that the ratio of Pd-TPP to DyLight 747 bound to the particles would depend on the amounts of dye added and not on the catalyst concentration. The nanospheres were vortexed for 20 minutes prior to reaction, and during the course of the reaction the reaction vessel was continuously vortexed. To quantify the amount of dye bound to the nanoparticles, we developed a spectrophotometry-based method. Equal concentrations of EDC, NHS, lumiphore and buffer were added to two tubes, and an equal amount of nanoparticles was added to each tube. A 10⁶ molar excess of tris was also added to the reaction volumes: one tube received tris before nanoparticle addition and the other received tris 30 minutes after nanoparticle addition. Since tris has a primary amine, adding tris before the nanoparticles should result in the vast majority of the lumiphore being bound to tris instead of to the nanospheres. At the end of the reaction, the nanospheres were pelleted with a rare earth magnet, leaving the nanoparticle-bound lumiphore in the pellet and the tris-bound lumiphore in the supernatant. The absorbance of the lumiphore in the supernatant was then quantified spectrophotometrically. The ratio Abs(t_tris = 30 min) over Abs(t_tris = 0 min) is equal to the fraction unreacted.
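The bookkeeping of this assay is a one-line ratio; a minimal sketch with made-up absorbance values:

```python
# The tris-first tube gives the absorbance of fully unreacted lumiphore, so
# the ratio of supernatant absorbances estimates the unreacted fraction.
# Absorbance values below are made-up placeholders.
abs_tris_at_0min = 0.42    # tris added before nanoparticles (all dye blocked)
abs_tris_at_30min = 0.13   # tris added 30 min after nanoparticles

frac_unreacted = abs_tris_at_30min / abs_tris_at_0min
print(f"unreacted: {100 * frac_unreacted:.0f}%, "
      f"bound: {100 * (1 - frac_unreacted):.0f}%")
```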


B. Surface Reaction Development

The current data indicate that our reaction scheme is capable of binding both Pd-TPP and DyLight 747 to the nanoparticles individually. The next step would be to confirm that our reaction conditions work with carboxy-functionalized Pd-Ar4TBP, or to adjust the reaction conditions if required.

ACKNOWLEDGMENTS

Fig. 6 Normalized absorbance for DyLight 747 for samples where tris was added before nanoparticles and 30 minutes after nanoparticles

(7)

The model is time dependent, with a critical time denoted as T, the time at which proliferation ends. Equation (5) represents the number of CTLs formed by cytotoxic T-cell expansion (t < T). T is set to seven since the CTL population peaks at day seven [14]. Equation (6) describes the contraction event during which CTL numbers decrease (t > T). Equation (7) represents the number of memory cells in the system after the expansion phase; r is the rate at which activated cytotoxic T-cells become memory cells (t ≥ T). In previous publications memory cell apoptosis (δM) is set to zero [14]. In this study the death rate equals the death rates of all cytotoxic T-cells and their derivatives to facilitate a greater biological understanding. De Boer's model does not account for a nonzero steady-state value M∞ of the memory cells. This can be accomplished by adding a constant term λA in equation (6).


B. Parameter Estimation

The values for the initial number of naïve T-cells were preserved, as were the natural death rates; hence A0 = 0.019 and δ = 0.0139. Parameters ρ and r must be revealed by repeated simulations of the model. There are no memory cells at time zero.

C. Results

Cytotoxic T-cell expansion peaks at day seven with memory cell formation when ρ = 1.2 and r = 1 (Fig. 3). At these values the system shows the biological profiles for seven-day T-cell expansion (5000-fold) and 95% contraction. CTL-derived memory cells stabilize at around 5% of the peak value of the CTLs. The model reflects a twenty-one-day period, which is concurrent with biological surveillance of the CTL immune response. This is the type of response the vaccine must trigger in order to generate a viable population of CTLs towards the Ebola virus.
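The three phases can be made concrete with a short numerical sketch. The fragment below integrates one plausible reading of the expansion/contraction/memory program (exponential clonal expansion at rate ρ for t < T, first-order contraction afterwards, and a memory pool fed by contracting CTLs) using the reported A0, δ, ρ and T; the contraction rate d_c and memory-conversion rate r_m used here are illustrative assumptions, not the paper's fitted values.

```python
from math import exp

# Hedged sketch of a De Boer-style program model in the spirit of
# Eqs. (5)-(7), under the assumed forms:
#   t <  T: dA/dt = rho*A                  (clonal expansion)
#   t >= T: dA/dt = -(d_c + delta)*A       (contraction)
#           dM/dt = r_m*A - delta*M        (memory formation and decay)
# rho, T, A0 and delta follow the text; d_c and r_m are assumptions chosen
# to give roughly 95% contraction and a memory plateau near 5% of the peak
# over the 21-day window of Fig. 3.
rho, T = 1.2, 7.0          # expansion rate (1/day), end of proliferation
A0, delta = 0.019, 0.0139  # initial naive CTL level, natural death rate
d_c, r_m = 0.2, 0.012      # assumed contraction and memory-conversion rates

dt, A, M = 0.001, A0, 0.0
for k in range(int(21.0 / dt)):            # forward Euler over 21 days
    t = k * dt
    if t < T:
        A += dt * (rho * A)
    else:
        A += dt * (-(d_c + delta) * A)
        M += dt * (r_m * A - delta * M)

peak = A0 * exp(rho * T)
print(f"CTL peak ~{peak:.1f} ({peak / A0:.0f}-fold), "
      f"day-21 CTLs {A:.2f}, memory {M:.2f} ({100 * M / peak:.1f}% of peak)")
```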


infected cells, hence the term (pM) [14]. During proliferation some memory cells become CTL effectors at a rate of r, and some die naturally at rate δM.. The other modification involves equation (4)’ for the number of effector CTLs in the system (z). As before, (z) increases when T-cells are stimulated by infected cells at a rate k and die naturally at rate δ. Yet, the presence of memory cells at the beginning of the infection impacts the value of (z). Some memory cells proliferate into CTL effectors (z) and the rate of formation of these cells is represented as the term (rM). Once activated the CTLs also go through proliferation (pz) at rate p, which is presumably equal to the rate of proliferation of memory cells above. A. Parameter Estimation Parameters values from the previous simulations were preserved. Values p and r are unknown and must be revealed by multiple simulations of the model. z0 is no longer initialized at zero, but by the output of the De Boer system for memory cell production (section IV). The magnitude of initial memory cells, M, dictates the strength of the immune response. B. Results

Fig. 3 Cytotoxic T-cell Expansion and Memory Cell Production A(t) = Expanding CTLs; A’(t) = Contracting CTLs; M(t) = Memory T-cells

The rates of proliferation (p) and conversion of memory cells into CTLS (r) were kept low to maintain the integrity of the model; 0.1 and 0.05 respectively. In this system the EBOV is contained (Fig. 4) as indicated by the low concentration of free virus as compared to Fig. 1. The model also shows the established biological profile for the CD8+ T-cell response throughout the course of infection: expansion, contraction, and memory cell stabilization.

V. EBOLA DYNAMICS IN THE VACCINATED SYSTEM In the absence of a vaccine, the naïve T-cell response was unable to rescue the system (section III). The vaccinated system is thus expressed as a revision of the Tuckwell model with the consideration that the initial number of cytotoxic T-cells (zо) is the critical factor that determines the magnitude of the immune response. During a challenge with the EBOV, both circulating vaccine-induced memory and naïve T-cells will respond to the virus. The revised model is as follows: (4)’

Fig. 4 Ebola Challenge in the Vaccinated System x(t) = uninfected cells; y(t) = infected cells; v(t) = free virus particles, z(t) = cytotoxic T-cells, and M(t) = memory T-cells

The primary modification is equation (8) for the memory cell population. Memory cells proliferate after contact with

VI. DISCUSSION

The series of mathematical models presented in this study assessed the virus-centered CD8+ T-cell response




towards the EBOV. Each model provided insight into how the immune system works, while revealing thresholds for critical parameters. The vaccinated CTL response contained the EBOV’s growth, despite its extreme virulence. The involvement of memory T-cells was successful, but not without limitations. There was still cellular damage to the system, which could not be eliminated with restrictions on biological rates and conditions. The limitations are of two types: memory T-cell ability, and the isolation of the CD8+ response. In principle, memory T-cells circulate through the entire body and are not immediately available to combat the virus. These cells travel and sometimes reside in various tissues [13]. T-cells are not activated by free viruses and require contact with infected cells instead. This contact can only occur after the EBOV has established an infection. Additionally, this study focused solely on the CD8+ T-cell response. There are other branches of the immune system to explore, such as the humoral response which is more suitable for removing freely circulating viruses, a need established by the models presented in this study. Nonetheless, the methodology outlined solidifies the use of mathematical models for establishing the specifications of a rationally designed Ebola vaccine.

REFERENCES

1. Oswald WB, Geisbert TW, Davis KJ et al. (2007) Neutralizing antibody fails to impact the course of Ebola virus infection in monkeys. PLoS Pathogens 3(1)
2. Bente D, Gren J, Strong JE, et al. (2009) Disease modeling for Ebola and Marburg viruses. Disease Models & Mechanisms 2(1-2):12-17
3. Terando A, Faries M, Morton D (2007) Vaccine therapy for melanoma: Current status and future directions. Vaccine 25:B4-B16
4. Callard R, Hodgkin P (2007) Modeling T- and B-cell growth and differentiation. Immunological Reviews 216(1):119-129
5. Tuckwell H (2003) Viral population growth models. University of Paris, Paris
6. Titenko A, Andaev EI, Borisova TI (1992) Ebola virus reproduction in cell cultures. Vopr Virusol 37(2):110-113
7. Nowak MA, May RM (2000) Virus Dynamics. Cambridge University Press, Cambridge UK
8. Tortora GJ, Funke BR, Case C. Microbiology: An Introduction, 9th edition. Benjamin Cummings
9. Berrington JE et al. (2005) Lymphocyte subsets in term and significantly preterm UK infants in the first year of life analysed by single platform flow cytometry. Clinical & Experimental Immunology 140(2):289-292
10. Komanduri KV, McCune JM (2001) 344(3):231-232
11. Roger P-M, Durant J, et al. (2003) Apoptosis and proliferation kinetics of T cells in patients having experienced antiretroviral treatment interruptions. J Antimicrob Chemother 52(2):269-275
12. Kessel A, Rosner I, Rozenbaum M et al. (2004) Increased CD8+ T Cell Apoptosis in Scleroderma Is Associated with Low Levels of NF-κB. Journal of Clinical Immunology 24(1):30-36
13. Luu RA, Gurnani K, Dudani R et al. (2006) Delayed Expansion and Contraction of CD8+ T Cell Response during Infection with Virulent Salmonella typhimurium. J Immunol 177(3):1516-1525
14. Kohler B (2007) Mathematically modeling dynamics of T-cell responses: Predictions concerning the generation of memory cells. Journal of Theoretical Biology 245(4):669-676



Respiratory Impedance Values in Adults Are Relatively Insensitive to Mead Model Lung Compliance and Chest Wall Compliance Parameters Bill Diong1, Michael D. Goldman2, and Homer Nazeran2

1 Engineering, Texas Christian University, Fort Worth, TX, U.S.A. 2 Electrical and Computer Engineering, University of Texas at El Paso, El Paso, TX, U.S.A.

Abstract— Impulse Oscillometry (IOS) measures respiratory resistance and reactance from 5 to 35 Hz. These data were obtained from 2 groups of adults enrolled in a study of IOS compared to other lung function testing methods: 1 group of 10 adults with no identifiable respiratory disease and 1 group of 10 adults with varying degrees of COPD. We used Mead's model of the respiratory system to derive parameter estimates of central inertance (I), central and peripheral resistances (Rc, Rp), and lung, chest wall, bronchial, and extrathoracic compliances (Cl, Cw, Cb, Ce) by least-squares-optimal fitting to the IOS data. This procedure typically produced multiple optimal solutions, with estimates of Cl and of Cw that varied by 2 to 3 orders of magnitude and were several orders of magnitude larger than expected physiological values, up to 8.6×10^5 L/kPa for Cl and 2.6×10^5 L/kPa for Cw. We then performed constrained optimization of normal adult data with both Cl and Cw parameters fixed at 2 L/kPa, which produced a group-averaged LS error that was 19.3% larger than for unconstrained optimization: Rc, I, Rp, Cb and Ce parameters changed by 0.99%, 1.76%, 22.0%, 11.9% and 10.6%, respectively. Constrained optimization of the COPD adults' data with the Cw fixed at 2 L/kPa and Cl fixed first at 1.5 L/kPa and then at 1.1 L/kPa produced group-averaged LS errors that were 23.8% larger and 23.6% larger, respectively, than for unconstrained optimization: Rc, I, Rp, Cb and Ce parameters changed by 2.12%, 4.88%, 18.5%, 6.46% and 25.5%, respectively, for Cl = 1.5 L/kPa; they changed by 1.64%, 4.30%, 18.4%, 6.64% and 18.5%, respectively, for Cl = 1.1 L/kPa, all relative to the unconstrained case. We conclude that the Mead model's impedance and its parameter estimates for normal and COPD adults are relatively insensitive to the Cl and Cw parameters.

Keywords— Respiratory impedance, respiratory system model, parameter estimation, impulse oscillometry, COPD.

I. INTRODUCTION

The search for a better method of assessing human lung function continues, since the existing standard lung function test, spirometry, requires subjects to inhale and exhale with maximum effort, which may be troublesome especially for the elderly and children, leading to unreliable results. One alternative to spirometry is the method of forced oscillation [1], and the Impulse Oscillometry System (IOS) [2] in particular, which requires only the subject's passive

cooperation. This method allows subjects to breathe normally, with a nose clip to close the nares. Brief 40 ms electrical pulses, producing 60-70 ms mechanical displacements of the speaker cone, result in pressure waves from the mouth inwards being superimposed on normal respiratory airflow into the lungs. Both the pressure stimulus and the resulting airflow response are recorded to provide information about the respiratory system's forced oscillatory impedance that can be used to detect and diagnose respiratory diseases. The resistive and reactive (ZR and ZX) impedance values that are calculated depend on the respiratory system's 'mechanical' resistances, compliances and inertances, so they can also be correlated with models consisting of electrical components that are analogous to those 'mechanical' components. Then parameter estimates for such models may provide an improved means of detecting and diagnosing respiratory diseases. Recently, studies have been conducted to compare the relative merits of several models of varying complexity, and the 7-element Mead model (see Fig. 1) was typically found to yield the lowest error [3, 4]. However, other issues besides minimizing the error in curve fitting must be considered. Specifically, the Mead model usually yielded unphysiologically large values as the optimal estimates of the lung and chest wall capacitances (Cl and Cw), the majority of those values being several orders of magnitude larger than the expected range of values. Moreover, the estimation results for the Mead model typically produced least-squares-optimized estimates of Cl and also of Cw that varied by 2 to 3 orders of magnitude, i.e., multiple optimal solutions were produced.

Fig. 1 Seven-element Mead model




II. MATERIALS AND METHODS

IOS measurements were obtained from 2 groups of randomly selected adults enrolled in a study of IOS compared to other lung function testing methods: 1 group of 10 adults with no identifiable respiratory disease and 1 group of 10 adults with varying degrees of COPD assessed by history and conventional spirometry. We used Mead's model of the human respiratory system to derive parameter estimates of central inertance (I), central and peripheral resistances (Rc, Rp), and lung, chest wall, bronchial, and extrathoracic compliances (Cl, Cw, Cb, Ce) by least-squares-optimal unconstrained fitting to the IOS resistive and reactive impedance values at 5, 10, 15, 20, 25 and 35 Hz. The procedure used to derive these estimates was as described in [3, 4] and is not repeated here. We then performed constrained optimization of the normal adults' data with both Cl and Cw parameters fixed at 2 L/kPa, these values being generally accepted as the reference values for lung and chest wall compliances in normal adults [5]. Then constrained optimization of the COPD adults' data with the Cw fixed at 2 L/kPa and Cl fixed at 1.5 L/kPa was performed, these values approximating the dynamic compliances at the breathing rate of 15 breaths per minute in mild-to-moderate COPD [5]. Finally, constrained optimization of the COPD adults' data with the Cw fixed at 2 L/kPa and Cl fixed at 1.1 L/kPa was performed, these values representing moderate-to-severe COPD [5].
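A minimal sketch of this constrained-versus-unconstrained fitting procedure is given below. It is illustrative only: the network arrangement inside mead_impedance is an assumed stand-in for the seven-element topology of Fig. 1, and the synthetic data, parameter values, and optimizer settings are not those of [3, 4].

import numpy as np
from scipy.optimize import least_squares

f = np.array([5., 10., 15., 20., 25., 35.])   # IOS frequencies (Hz)
w = 2 * np.pi * f

def mead_impedance(params, w):
    # Assumed (illustrative) arrangement of the 7 elements: a central branch
    # Rc + jwI in series with [Cb shunt || (Rp in series with Cl and Cw)],
    # all shunted at the input by the extrathoracic compliance Ce.
    Rc, I, Rp, Cb, Ce, Cl, Cw = params
    z_clcw = 1/(1j*w*Cl) + 1/(1j*w*Cw)        # lung + chest wall compliances
    z_periph = Rp + z_clcw                     # peripheral branch
    z_cb = 1/(1j*w*Cb)                         # bronchial compliance shunt
    z_core = Rc + 1j*w*I + (z_cb*z_periph)/(z_cb + z_periph)
    z_ce = 1/(1j*w*Ce)                         # extrathoracic shunt
    return (z_ce*z_core)/(z_ce + z_core)

def residuals(params, w, zr, zx):
    z = mead_impedance(params, w)
    return np.concatenate([z.real - zr, z.imag - zx])

# Synthetic "measured" data from known parameters plus noise (illustration only)
true = np.array([0.2, 0.002, 0.15, 0.3, 0.5, 2.0, 2.0])
z_true = mead_impedance(true, w)
rng = np.random.default_rng(0)
zr = z_true.real + 0.005*rng.standard_normal(f.size)
zx = z_true.imag + 0.005*rng.standard_normal(f.size)

x0 = np.array([0.1, 0.001, 0.1, 0.1, 0.1, 1.0, 1.0])

# Unconstrained fit: all seven parameters free (positivity bounds only)
fit_u = least_squares(residuals, x0, args=(w, zr, zx), bounds=(1e-9, np.inf))

# Constrained fit: Cl and Cw fixed at 2 L/kPa, the normal-adult reference values [5]
def residuals_fixed(p5, w, zr, zx):
    return residuals(np.concatenate([p5, [2.0, 2.0]]), w, zr, zx)

fit_c = least_squares(residuals_fixed, x0[:5], args=(w, zr, zx), bounds=(1e-9, np.inf))

print("unconstrained LS error:", np.sum(fit_u.fun**2))
print("constrained   LS error:", np.sum(fit_c.fun**2))

Fixing Cl and Cw simply removes them from the search space; comparing the two residual sums is how a constrained-versus-unconstrained error increase of the kind reported below would be computed.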

III. RESULTS The least-squares-optimal unconstrained fitting of the Mead model to the normal adults’ IOS data typically produced multiple optimal estimates of Cl and also of Cw that varied by 2 to 3 orders of magnitude, while the estimates of the remaining parameters manifested very small or no differences. Arbitrarily choosing one of these optimal solutions to represent “the” Mead model corresponding to the IOS test data being fitted resulted in values of Cl from 0.22974 to 855000 (mean 97353, SD 183666) L/kPa, and values of Cw between 0.57277 and 262000 (mean 47322, SD 81592) L/kPa. Furthermore, the minimal values of optimal Cl were mostly not paired up with the minimal values of optimal Cw. The least-squares-optimal unconstrained fitting of the Mead model to the COPD adults’ IOS data also typically produced multiple optimal estimates of Cl and of Cw that varied by 2 to 3 orders of magnitude, while the estimates of the remaining parameters again manifested little or no differences. Arbitrarily selecting one of these optimal solutions to represent “the” Mead model corresponding to the test data being fitted resulted in Cl between 1.2 and 172000

(mean 36271, SD 40501) L/kPa, and in Cw from 0.24 to 23000 (mean 5882, SD 8589) L/kPa. The minimal values of optimal Cl were again mostly not paired up with the minimal values of optimal Cw. Constrained optimization of the normal adults' data with both Cl and Cw parameters fixed at 2 L/kPa produced a group-averaged LS error that was 19.3% larger than for unconstrained optimization. Rc, I, Rp, Cb and Ce parameters changed by a group-average of 0.99%, 1.76%, 22.0%, 11.9% and 10.6%, respectively, relative to their unconstrained values. Constrained optimization of the COPD adults' data with Cw fixed at 2 L/kPa and Cl fixed at 1.5 L/kPa produced group-averaged LS errors 23.8% larger than for unconstrained optimization. Rc, I, Rp, Cb and Ce parameters changed by a group-average of 2.12%, 4.88%, 18.5%, 6.46% and 25.5%, respectively; these changes being relative to the unconstrained case. Finally, constrained optimization of the COPD adults' data with Cw fixed at 2 L/kPa and Cl fixed at 1.1 L/kPa produced group-averaged LS errors 23.6% larger than for unconstrained optimization. Rc, I, Rp, Cb and Ce parameters changed by 1.64%, 4.30%, 18.4%, 6.64% and 18.5%, respectively, relative to their unconstrained values.

IV. CONCLUSIONS

The 7-element Mead model typically fits IOS respiratory impedance data from normal and COPD adults with lower least-squares error than other commonly used models. But estimation of its parameters also usually yields multiple optimal solutions, with estimates of the lung and chest wall compliances (Cl and Cw) that vary by 2 to 3 orders of magnitude. Moreover, these least-squares-optimized estimates of Cl and also of Cw were several orders of magnitude larger than the expected physiological values. This study has enabled us to conclude that the respiratory impedance values produced by the Mead model are relatively insensitive to its lung compliance and chest wall compliance parameters. In addition, the Mead model's other parameter estimates derived from IOS data in normal and COPD adults are relatively insensitive to that model's lung compliance and chest wall compliance parameters. In particular, the large airway parameters of resistance and inertance change in value by an average of less than 5% even when these compliance parameter values are changed by a few orders of magnitude. The small airway parameters change a little more; the peripheral airway resistance changes in value (on average) by less than 22%, while the bronchial compliance changes in value (on average) by less than 12%. We suggest that the small-pressure-small-volume



perturbations produced by IOS are not likely transmitted beyond airways smaller than 2 mm in diameter, consistent with direct measurements of pressures reported by Macklem and Mead [6]. This leads to trivial pressures applied to the lung and chest wall, and accordingly, their compliances are not significant factors in model-derived calculations of the remaining Mead model parameters.

REFERENCES 1. DuBois AB, Brody AW, Lewis DH, Burgess BF (1956) Oscillation mechanics of lungs and chest in man. J. Appl. Physiol. 8:587-594 2. VIASYS MasterScreen IOS. VIASYS/Jaeger, Yorba Linda CA, USA 3. Diong B, Rajagiri A, Goldman M, Nazeran H (2009) The augmented RIC model of the human respiratory system. Med Biol Eng Comput 47:395–404


4. Diong B, Nazeran H, Nava P, Goldman M (2007) Modeling human respiratory impedance. IEEE Engineering in Medicine and Biology Society Magazine: Special Issue on Respiratory Sound Analysis 26:48–55
5. Mead J (1969) Contribution of compliance of airways to frequency-dependent behavior of lungs. J. Appl. Physiol. 26(5):670-673
6. Macklem P, Mead J (1967) Resistance of central and peripheral airways measured by a retrograde catheter. J. Appl. Physiol. 22(3):395-401

Author: Bill Diong
Institute: Texas Christian University
Street: 2840 W. Bowie St.
City: Fort Worth
Country: U.S.A.
Email: [emailprotected]



A Systems Biology Model of Alzheimer’s Disease Incorporating Spatial-temporal Distribution of Beta Amyloid C.R. Kyrtsos1,2 and J.S. Baras1,2,3

1 Fischell Department of Bioengineering, University of Maryland, College Park, MD, US 2 Institute for Systems Research, University of Maryland, College Park, MD, US 3 Department of Electrical Engineering, University of Maryland, College Park, MD, US

Abstract–– Alzheimer’s disease (AD) is one of the most devastating neurological disorders that affects the elderly. Pathological characteristics at the tissue and cell level include loss of synapses and neurons in the hippocampal and cortical regions, a significant inflammatory response, and deposition of the beta amyloid (Aβ) protein in brain parenchyma and within the basement membrane of cerebral blood vessels. These physical changes are believed to lead to gradual memory loss, changes in personality, depression and loss of control of voluntary muscle movement. Currently, 1 in 8 individuals over age 65 is affected by AD; this translates to over 5 million afflicted individuals in the US alone. Aβ has long been implicated as the main culprit in AD pathogenesis, though cholesterol, apolipoprotein E (apoE) and the low density lipoprotein-related receptor protein (LRP-1) are now also believed to play a role. In this paper, we describe a spatial-temporal mathematical model that has been developed to study the interactions between cholesterol, Aβ and LRP-1. Models for neuron survival, synapse formation and maintenance, and microglial motion have also been discussed. The paper concludes with a description of the proposed algorithm that we will use to simulate this complex system.

Keywords–– Alzheimer’s disease, cholesterol, LRP-1, apoE, math modeling.

I. INTRODUCTION Lipid metabolism, particularly the processing of cholesterol, and the expression level of LRP-1, have come to the forefront of AD research in the past decade. Recent studies have demonstrated that cholesterol levels help to regulate the generation and clearance of Aβ [9, 13, 19]. The LRP-1 receptor, located at the blood-brain barrier (BBB) and on the neuronal plasma membrane, is responsible for clearance of Aβ from the brain and transport of cholesterol into the neuron, respectively. Previous research has shown that the expression of LRP-1 at the BBB decreases with age [6]. This decrease in LRP-1 has been implicated in the buildup of Aβ in the brain and breakdown of the neurovascular unit during aging, possibly leading to AD [23]. There is, however, a disagreement in the literature about whether high or low cholesterol contributes to AD pathogenesis.

Decreased brain cholesterol has been noted by several studies. A recent study by Liu et al demonstrated that increasing the level of APP (Amyloid Precursor Protein), particularly the γ-secretase cleavage product AICD, led to a decrease in LRP-1 expression levels, an increase in apoE levels and a decrease in cholesterol [15]. Further studies have shown that decreased brain levels of cholesterol are found in both apoE4 knock-in mice and in AD brains [10, 14]. The Framingham study, which tracked 1894 individuals over the course of 16-18 years, found that low or normal levels of cholesterol were correlated with lower cognitive performance levels [7]. Conversely, high cholesterol has also been shown to play a possible role in AD pathogenesis [19]. Since high cholesterol is believed to lead to an increased plaque load and subsequent neurodegeneration, several studies have looked at the effects of statins on AD pathogenesis. Fassbender et al studied the effect of simvastatin and lovastatin on primary neurons and found that this led to decreased levels of Aβ40 and Aβ42 [8]. Refolo et al expanded on this by studying the effects of the cholesterol-lowering drug BM15.766 on transgenic mice expressing an AD phenotype and saw that plasma cholesterol levels, brain Aβ peptides and Aβ load were all decreased [20]. One of the most interesting recent studies clearly demonstrates that reducing the levels of brain cholesterol may not prevent AD pathogenesis, as has been suggested by previous studies. In this study, Halford and Russell crossed transgenic AD mice with cholesterol 24-hydroxylase knockout mice, and found that Aβ plaque deposition did not vary statistically between the mutant and AD control strains [9]. The exact relationship between APP, Aβ and cholesterol processing, and the effects that low brain cholesterol and LRP-1 expression levels have on neurodegeneration in AD, is currently not well understood. The goal of this paper is to develop a basic mathematical framework to study how the molecules may interact with each other during the initiating phase of AD pathogenesis. By using a systems biology approach to studying these various interactions and having the ability to precisely alter the expression of individual proteins of interest, we will be able to study the effect of



minor perturbations on the system as a whole. This provides direction for future experimental research.

II. LOCAL NETWORK FOR Aβ & CHOLESTEROL PROCESSING

Beta amyloid (Aβ) is considered to be the key protein and causative factor in AD due to its increased concentration in the brain and presence as the core protein in amyloid plaque deposits [21]. Previous studies have shown that cleavage of the amyloid precursor protein (APP) by β-secretase followed by γ-secretase leads to the 39-42 amino acid length amyloidogenic forms of Aβ. The majority of Aβ is believed to be produced by neurons in the brain; very little has been shown to cross from the blood to the brain via the BBB. Alternative splicing of APP by α-secretase leads to generation of sAPPα, a non-toxic product believed to play a role in neuronal excitability and enhance synaptic plasticity, learning and memory [17]. The role of cholesterol in Aβ processing is just starting to be studied, though a clear understanding has not yet been reached.


decreases in hippocampal volume. Significant deposition in the leptomeningeal vessels in a process known as cerebral amyloid angiopathy (CAA) is also observed. For this reason, our model focuses on the distribution of Aβ within the hippocampus. The decision to use a statistical or deterministic model was determined by calculating the number of Aβ molecules. The volume of the hippocampus in a healthy adolescent is ~5.8-6 cm^3 [16]. The endpoint concentration levels of total brain Aβ of both healthy and AD individuals have previously been studied and found to be 0.6 µM and 8.8 µM, respectively [5]. This corresponds respectively to a total of 2.2×10^15 and 3.2×10^16 Aβ molecules in a healthy and an AD brain. Thus, looking at a single neuron or a small number of neurons would indicate that a stochastic process should be used to model Aβ distribution. However, since we are interested in studying much larger portions of the hippocampus containing hundreds to thousands of neurons in our simulations, the number of molecules that we would be dealing with would be on the order of >10^14, which cannot easily or efficiently be modeled stochastically. We can overcome this difficulty by using a reaction-diffusion equation (RDE) to model our system:
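(The typeset equation was lost in conversion; the standard reaction-diffusion form implied by the definitions in the next paragraph is:)

∂c/∂t = D_AB ∇²c + Σ_i R_i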

where c is the concentration of beta amyloid, D_AB is the diffusion coefficient of Aβ and R_i represents the reactions that occur within the control volume. The reaction term can be expanded to account for production, degradation by proteases (specifically, degradation by the insulin-degrading protease, IDE), fibril formation and uptake by microglia, defined respectively as follows.
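(The four typeset reaction terms were also lost. A hedged reconstruction from the definitions in the surrounding text — production at the neuron soma as a delta function with Poisson rate β, degradation at rate α depending on the Aβ and IDE concentrations, fibril attachment rates γ1 and γ2, and microglial uptake at rate ε at the microglia positions — might read as below; F, x_neuron and x_microglia are symbols introduced here for illustration, and the exact original forms are not recoverable.)

R_production = β δ(x − x_neuron)
R_degradation = −α c [IDE]
R_fibril = −γ1 c F − γ2 c²
R_uptake = −ε c δ(x − x_microglia)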

Fig. 1 Cholesterol and Aβ processing in the brain

In the brain, cholesterol is generated from one of two pathways: de novo synthesis, or by uptake from brain lipoproteins [1, 4, 18]. As we age, de novo synthesis by neurons decreases to a trivial level, and the majority of cholesterol is synthesized by astrocytes and delivered to neurons via apoE [1]. Figure 1 shows Aβ and cholesterol processing, as well as the interrelationships between the two pathways.

III. DERIVATION OF THE RDE FOR Aβ DISTRIBUTION

The hippocampus is the initial and most affected region of the brain, with severe neuronal loss leading to significant

Beta amyloid production occurs only in the cell body of the neuron, whose location is represented by a delta function. The Aβ production rate is given by β, a Poisson distribution whose mean depends on several factors.

The production rate is also dependent on the general stress level, the extent of inflammation, and whether or not the neurons in the local environment are re-modeling or recovering from a recent insult. Protease degradation is modeled at the macroscopic level and depends on the reaction rate (α), as well as the concentrations of Aβ and IDE. A simplified model of fibril formation is given where γ1,2 represent the respective fibril attachment rates for Aβ onto a fibril and for two monomers coming together.




Finally, the rate at which microglia uptake Aβ is given by ε, and is dependent both on the Aβ concentration as well as the location of the microglia, which, like neurons, has been modeled using a delta function. The flux of Aβ across the blood-brain barrier is very important to the overall dynamics of the model. The majority of clearance of Aβ from the brain occurs by transport of apolipoproteinE-bound Aβ via LRP-1 receptors. Mathematically, the net flux can be modeled as the sum of the passive and active transport. Passive transport can be modeled using simplified Kedem-Katchalsky equations that account for leakage across a semi-permeable BBB. This state only occurs in later stages of AD when cerebrovascular dysfunction leads to local breakdown of the BBB due to either plaque deposition (CAA) or inflammation of the neurovascular unit. For our modeling purposes, only the active transport has a non-trivial contribution to the early pathogenic stages and the rate of reaction is modeled using Michaelis-Menten kinetics:

R_s = R_max c / (K_M + c)

where R_max is the maximum rate of reaction, K_M is the Michaelis constant and c represents the concentration of Aβ within a narrow boundary region around the BBB. The net reaction rate is also dependent upon the rate at which Aβ binds to apoE to be transported and on the density of LRP-1 receptors along the BBB. For our model, the role of LRP-1 in the reaction rate is written as a ratiometric coefficient, L, where L ranges from 0 to 1.


IV. DERIVATION OF Aβ DIFFUSION COEFFICIENT

The diffusion coefficient for Aβ (D_AB) moving through brain tissue is not a readily available parameter due to the difficulty in obtaining accurate measurements. To overcome this, D_AB was calculated using a combination of the Stokes-Einstein equation and a previously described method [12]. The effective diffusion coefficient through brain can be given by:

D_AB = D / λ²

where D is the theoretical value for the diffusion coefficient given by the Stokes-Einstein relationship in a fluid medium free of any obstacles and λ is the tortuosity, or the average hindrance of a complex medium relative to an obstacle-free medium. In the brain, λ is typically ~1.6, though this value can increase during insult or stress to the brain, decreasing the effective diffusion. The Stokes-Einstein relationship is:

D = k_B T / (6 π η r)

where k_B is the Boltzmann constant (1.38×10^-23 J/K), T is the temperature in Kelvin (310.15 K), η is the effective viscosity (0.7-1 mPa·s, [2]), and r is the effective radius of Aβ (estimated as 2 nm, [11]). Substituting these values into the given equations gives an effective diffusion coefficient of D_AB ~1.14×10^-6 cm^2/s.
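As a quick check of that arithmetic (using the η = 1 mPa·s endpoint of the quoted viscosity range, which reproduces the stated ~1.14×10^-6 cm^2/s before the tortuosity correction):

import math

k_B = 1.38e-23   # Boltzmann constant (J/K)
T = 310.15       # body temperature (K)
eta = 1e-3       # effective viscosity (Pa·s); upper end of the 0.7-1 mPa·s range
r = 2e-9         # effective Aβ radius (m)
lam = 1.6        # tortuosity of brain tissue

D = k_B * T / (6 * math.pi * eta * r)             # free-medium Stokes-Einstein value
print(f"D       = {D * 1e4:.2e} cm^2/s")          # ~1.14e-06 cm^2/s
print(f"D/λ^2   = {D * 1e4 / lam**2:.2e} cm^2/s") # tortuosity-corrected value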


V. MICROGLIA MODEL

The rate at which apoE and Aβ bind is defined macroscopically as:

d[Aβ·apoE]/dt = σ [Aβ][apoE]

The total flux of Aβ across the BBB is described by the reaction rate (R_s) and the total cross-sectional area of BBB that we are studying:

Ψ = R_s A

Experimental values for the Michaelis constant and the R_max for Aβ40 were derived by Shibata et al to be 15.3 nM and 70-100 nM, respectively [22]. Values for Aβ42 are somewhat unnecessary since LRP-1 predominantly transports the 40 amino acid length protein as opposed to the 42 amino acid form. Additionally, Aβ40 is the form found in cerebrovascular plaques, so it is much more appropriate to assume that this will be the form localized near LRP-1 receptors.

The immune response of the central nervous system (CNS) is separate from that of the rest of the body, in part due to the presence of the blood-brain barrier (BBB). CNS macrophages, known as microglia, are distributed relatively uniformly throughout the brain tissue during normal, resting states. The movement of microglia has been modeled using two separate equations, depending on whether the microglia is in a ramified state or in an activated state actively traveling up a concentration gradient of chemoattractant. While in the ramified state, microglia are modeled using a simple random walk along a continuous plane:

x_c(i, t+1) = x_c(i, t) + ξ(t)

where x_c is a matrix that tracks the position of the center of mass of the ith microglia at time t, and ξ(t) is Gaussian white noise, with the constraint that the centers of mass of two microglia cannot be closer than 2R (R = microglial



radius) at the same time point (microglial aggregation only occurs in the activated state). Microglia are assigned initial positions prior to running the simulation. Ramified microglia have several other constraints: microglia do not traverse the BBB and are confined to brain tissue; under extreme circumstances, macrophages in the blood may cross into the brain and differentiate into microglia; and microglia switch to the activated state once a specified threshold difference has been exceeded. When the local concentration of Aβ in the brain interstitial fluid reaches 200 nM or greater, microglia become activated and migrate up the concentration gradient towards the main source of Aβ (experimentally, this has been shown to be near neurons or near the basement membrane of blood vessels). The directed movement of microglia towards a chemoattractant (chemotaxis) can be modeled using the Langevin equation of motion:

dx_c(i, t)/dt = κ α ∇ϕ + ξ(t)

where x_c(i, t) represents the position of the ith microglia at time t,

∇ϕ represents the Aβ concentration gradient, ξ represents

Gaussian white noise that the microglia would experience during chemotaxis, α is a positive constant that describes the strength of chemotaxis (α = 1 for our simulations), and κ describes whether it is positive chemotaxis (κ = +1; value used for our simulations) or negative chemotaxis (κ = -1).
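A minimal simulation sketch of this two-state movement rule follows. Only the 200 nM activation threshold and α = κ = 1 come from the text; the Aβ field, time step, noise amplitude, and the omission of the 2R separation constraint (a single cell is simulated) are illustrative assumptions.

import numpy as np

rng = np.random.default_rng(1)

alpha, kappa = 1.0, 1.0    # chemotaxis strength and direction (from the text)
dt, sigma = 1.0, 0.5       # assumed time step and noise amplitude
THRESHOLD_NM = 200.0       # activation threshold for local [Aβ] (200 nM)

def a_beta_nM(x):
    # Assumed radial Aβ concentration field (nM) peaking at the origin (the source)
    return 400.0 * np.exp(-np.linalg.norm(x) / 20.0)

def grad_phi(x, eps=1e-3):
    # Finite-difference gradient of the Aβ field
    g = np.zeros(2)
    for k in range(2):
        e = np.zeros(2)
        e[k] = eps
        g[k] = (a_beta_nM(x + e) - a_beta_nM(x - e)) / (2 * eps)
    return g

x = np.array([30.0, 0.0])  # initial microglia position (arbitrary units)
for _ in range(500):
    noise = sigma * rng.standard_normal(2)
    if a_beta_nM(x) >= THRESHOLD_NM:
        # Activated state: Langevin chemotaxis up the concentration gradient
        x = x + (kappa * alpha * grad_phi(x) + noise) * dt
    else:
        # Ramified state: simple unbiased random walk
        x = x + noise * dt
print("final position:", np.round(x, 1))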

VI. NEURAL NETWORK MODEL FOR NEURONS & SYNAPSES

Neurons have been modeled using a modified McCulloch-Pitts network that has been previously developed by Butz et al [3]. The network is defined by several variables: N, NE, C, θ, Φ and β. N represents the number of logical neurons, NE is the number of excitatory neurons (1 to NE are excitatory, while NE to N are inhibitory), C is the NxN matrix of connections between neurons, θ is the common threshold of all neurons, Φ is the relative weight of inputs from inhibitory neurons, and β is the noise level in the threshold function. The state of the network at any given time t is defined by the vector z_t:

z_t = (z_t(1), …, z_t(N)),  z_t(i) ∈ {0,1}, 1 ≤ i ≤ N

The probability of neuron i being active in the next time instant is governed by the threshold potential (θ), the actual membrane potential (MP), the noise level (β), and the percentage afference (α) as:

P(z_{t+1}(i) = 1) = 1 / (1 + exp(−(α MP_i − θ)/β))
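A compact sketch of one stochastic update of such a network is given below. The sparse random connectivity, the sign convention for C, and the placement of the afference factor α inside the sigmoid are assumptions; only the roles of N, NE, C, θ, Φ and β are taken from the description above.

import numpy as np

rng = np.random.default_rng(2)

N, NE = 100, 80                        # total and excitatory neuron counts (illustrative)
theta, phi, beta, alph = 1.0, 0.6, 0.2, 1.0

C = rng.random((N, N)) * (rng.random((N, N)) < 0.1)  # sparse random connectivity
z = (rng.random(N) < 0.1).astype(float)              # initial binary state

def step(z):
    # Membrane potential: excitatory input minus Φ-weighted inhibitory input
    mp = C[:, :NE] @ z[:NE] - phi * (C[:, NE:] @ z[NE:])
    # Noisy threshold: probability of firing in the next time instant
    p = 1.0 / (1.0 + np.exp(-(alph * mp - theta) / beta))
    return (rng.random(N) < p).astype(float)

for _ in range(10):
    z = step(z)
print("active neurons:", int(z.sum()))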

The percentage afference models the shift in the probability of firing, and is related to the relative levels of Aβ. Changes in network connections are modeled by updating the connectivity matrix at each time step with respect to changes in pre- and post-synaptic elements. Decay of pre- and post-synaptic elements is proportional to the strength of existing connections and the relative level of Aβ. Pre-synaptic elements that are connected to a post-synaptic element that is lost are able to recombine and form a new synapse in future time steps, whereas the 'lost' post-synaptic element is removed from the post-synaptic pool. Free pre- and post-synaptic elements are updated to account for synaptic losses, recombinations, and strengthening of existing contacts. The number of possible contact offers is dependent on the number of free contacts that a neuron contributes to the network. The number of neurons in the network can also be varied throughout the simulation. Neuron loss is modeled by the calcium set-point hypothesis in combination with apoptosis occurring above a maximal beta amyloid concentration. Calcium concentration is directly correlated with membrane potential and activity level, and it has been previously determined that a neuron's activity level should remain between 0.25 < si [...]

[...] if Tx > To, the treatment is considered as growth inhibition; if Tx < To, there is no net growth after the treatment, and so its effect is considered as cell killing. Note that net growth values were generated by normalizing the data from each treatment to the control values, which did not receive DOX or NP. Statistical significance was identified by one-way ANOVA for the difference among treatment groups at the same DOX concentration. The p-value [...]

[...] (>660nm) (right) second principal component

B. Macular Pigment

Fig. 1 (left) cSLO average over 10 single images (right) standard fundus image, with less contrast; lesions are more visible but blood vessels appear brighter and washed-out

The initial approach is to subtract a constant background proportional to the pixel intensities along major blood vessels from the fundus camera images. A constant background is certainly idealized and often appears insufficient to correct images from elderly patients with highly fluorescent lenses or local, highly scattering retinal pathology leading to artifacts in the macular pigment maps as shown in Figure 2.

Fig. 2 (left) color fundus image, drusen appear as yellowish spots. (right) macular pigment map based on the two-wavelength formula in Section IIA. Blood vessels and optic disc are artifacts due to the background in standard fundus images

Different techniques have been developed to quantitatively measure macular pigment in the human retina [3,4,8]. National Eye Institute study patients have been measured by heterochromatic flicker photometry, but significant inter- and cross-method variations create a need for more consistent and reliable quantification [1]. The fluorescence method appears consistent but requires a highly specialized optical system. We obtain spatial macular pigment maps from the fluorescence method while replacing the two-wavelength cSLO with a modified standard fundus camera; see Figure 4.
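The two-wavelength map computation referenced above (the formula itself is in Section IIA, not reproduced here) can be sketched as follows. This version assumes the common single-pass form in which macular pigment optical density is proportional to the log-ratio of the two autofluorescence images, normalized to a perifoveal reference where pigment is negligible; the function name and the extinction coefficients K1 and K2 are illustrative assumptions, not the authors' values.

import numpy as np

def macular_pigment_map(af1, af2, K1=0.62, K2=0.10, ref_mask=None):
    """Assumed two-wavelength macular pigment density map (sketch).

    af1, af2 : registered autofluorescence images at the short (460-500 nm)
               and long (520-600 nm) excitation bands, as float arrays.
    K1, K2   : assumed macular pigment extinction coefficients at the two bands.
    ref_mask : boolean mask of a perifoveal region with negligible pigment,
               used to zero the baseline.
    """
    ratio = np.log10(af2 / np.clip(af1, 1e-6, None))
    if ref_mask is not None:
        ratio = ratio - np.median(ratio[ref_mask])  # perifoveal normalization
    return ratio / (K1 - K2)                        # optical density map

# Example with synthetic images (illustration only)
af1 = np.full((64, 64), 100.0)
af2 = np.full((64, 64), 100.0)
af1[28:36, 28:36] = 60.0   # foveal attenuation at the short-wavelength band
mask = np.zeros((64, 64), bool); mask[:8, :] = True
mp = macular_pigment_map(af1, af2, ref_mask=mask)
print("peak MP optical density:", mp.max().round(2))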

Fig. 4 (above) autofluorescence images obtained using two excitation bands (460-500nm, 520-600nm) with emission collected above 660nm at a standard fundus camera (below) spatial macular pigment map and radial profile centered around the fovea


High-Resolution Autofluorescence Imaging for Mapping Molecular Processes within the Human Retina

C. Retinal Microvascular System

IV. CONCLUSIONS

Abnormalities of the retinal microcirculation are clinically very important to image in a variety of diseases such as diabetes and sickle cell disease. In the earliest stages of their associated retinal pathology, local microvascular dropout appears to precede proliferation of new retinal vessels that are fragile and subject to leakage, leading to edema and hemorrhage that frequently lead to visual loss. Currently, the retinal microvascular system is imaged by injecting fluorescein into an arm vein and taking a time series of fluorescence images as the fluorescein washes in and out of the retinal vessels and underlying choroidal microvasculature. The ability to map the finest retinal microcirculation non-invasively (without injection of fluorescein) might be particularly useful for following early microvasculature changes in retinal diseases. We found that the local attenuation of cSLO autofluorescence by hemoglobin in retinal microvessels can provide a simple noninvasive modality to obtain a map of the retinal microvascularization. The local presence of red blood cells within a given retinal capillary is a stochastic process in each cSLO image, in which one pixel at a time is acquired (in < 1 microsec). Thus, to map the retinal capillaries we require accurate registration and averaging over an image set curated to remove images distorted by spontaneous eye motions. In Figure 5, we show the average of such a curated cSLO autofluorescence image series, inverted and with a standard edge detection algorithm applied to identify the microvascular network in a normal eye. The noninvasive character of this simple method allows for broad screening studies to detect and follow early local changes in the retinal microvasculature in preclinical disease, in conditions where invasive fluorescein angiography would be inappropriate or contraindicated.
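A sketch of that registration–averaging–edge-detection chain is given below. The registration method, filter scale, and synthetic frames are assumptions, since the paper does not specify an implementation.

import numpy as np
from scipy import ndimage
from skimage import feature, registration

def microvascular_map(frames):
    """Average registered cSLO autofluorescence frames, invert, detect edges.

    frames : list of 2-D float arrays (a curated set with eye-motion-distorted
             frames already removed). Parameters here are illustrative choices.
    """
    ref = frames[0]
    registered = [ref]
    for f in frames[1:]:
        # Translation-only registration via phase cross-correlation (assumed adequate)
        shift, _, _ = registration.phase_cross_correlation(ref, f)
        registered.append(ndimage.shift(f, shift))
    avg = np.mean(registered, axis=0)
    inverted = avg.max() - avg                 # vessels (dark in AF) become bright
    return feature.canny(inverted, sigma=2.0)  # binary map of vessel edges

# Example with synthetic frames (illustration only)
rng = np.random.default_rng(3)
base = np.ones((128, 128)) * 100.0
base[:, 60:64] -= 30.0                         # a dark "vessel" stripe
frames = [base + rng.normal(0, 2, base.shape) for _ in range(10)]
edges = microvascular_map(frames)
print("edge pixels:", int(edges.sum()))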

Fig. 5 (left) inverted cSLO average. (right) edge detection provides map of the retinal microvascular system


We have developed algorithms to analyze noninvasive multispectral retinal autofluorescence image sets. We have shown that this approach allows us to map with high resolution the distribution of different strongly absorbing species within the retina (specifically lutein, zeaxanthin, and hemoglobin) using widely available clinical imaging devices. We have identified and characterized a number of image artifacts that are driving further interactive refinements in our multispectral imaging protocol and our analysis algorithms. We hope to extend our clinical noninvasive mapping to include rhodopsin, oxy- and deoxy-Hb, lutein, zeaxanthin, and the different principal fluorophores within the A2E pathway that may be driving early age-related RPE pathology.

ACKNOWLEDGMENT The research was funded by the Intramural Research Program of NICHD/NIH, by NSF (CBET0854233), by NGA (HM15820810009), and by ONR (N000140910144). The authors gratefully acknowledge Prof. John J. Benedetto.

REFERENCES

1. Beatty S, van Kuijk FJ, Chakravarthy U (2008) Macular pigment and age-related macular degeneration: longitudinal data and better techniques of measurement are needed. Invest Ophthalmol Vis Sci 49(3):843-845
2. Bird AC, Bressler NM, Bressler SB, Chisholm IH, Coscas G, Davis MD, de Jong PT, Klaver CC, Klein BE, Klein R (1995) An international classification and grading system for age-related maculopathy and age-related macular degeneration. The International ARM Epidemiological Study Group. Surv Ophthalmol 39(5):367-374
3. Delori FC (2004) Autofluorescence method to measure macular pigment optical densities fluorometry and autofluorescence imaging. Arch Biochem Biophys 430(2):156-162
4. Delori FC et al. (2001) Macular pigment density measured by autofluorescence spectrometry: comparison with reflectometry and heterochromatic flicker photometry. J Opt Soc Am A Opt Image Sci Vis 18(6):1212-1230
5. Framme C, Brinkmann R, Birngruber R, Roider J (2002) Autofluorescence imaging after selective RPE laser treatment in macular diseases and clinical outcome: a pilot study. Br J Ophthalmol 86(10):1099-1106
6. Holz FG, Bindewald-Wittich A, Fleckenstein M, Dreyhaupt J, Scholl HPN, Schmitz-Valckenberg S (FAM-Study Group) (2007) Progression of geographic atrophy and impact of fundus autofluorescence patterns in age-related macular degeneration. Am J Ophthalmol 143(3):463-472
7. Meyers SM, Ostrovsky MA, Bonner RF (2004) A model of spectral filtering to reduce photochemical damage in age-related macular degeneration. Trans Am Ophthalmol Soc 102:83-93
8. Trieschmann M, et al. (2003) Macular pigment: quantitative analysis on autofluorescence images. Graefes Arch Clin Exp Ophthalmol 241(12):1006-1012


Local Histograms for Classifying H&E Stained Tissues M.L. Massar1, R. Bhagavatula2, M. Fickus1, and J. Kovačević2,3

1 Department of Mathematics and Statistics, Air Force Institute of Technology, Wright Patterson Air Force Base, USA 2 Department of Electrical and Computer Engineering, Carnegie Mellon University, Pittsburgh, USA 3 Department of Biomedical Engineering, Carnegie Mellon University, Pittsburgh, USA

Abstract— We introduce a rigorous mathematical theory for the analysis of local histograms, and consider the appropriateness of their use in the automated classification of textures commonly encountered in images of H&E stained tissues. We first discuss some of the many image features that pathologists indicate they use when classifying tissues, focusing on simple, locally-defined features that essentially involve pixel counting: the number of cells in a region of given size, the size of the nuclei within these cells, and the distribution of color within both. We then introduce a probabilistic, occlusion-based model for textures that exhibit these features, in particular demonstrating how certain tissue-similar textures can be built up from simpler ones. After considering the basic notions and properties of local histogram transforms, we then formally demonstrate that such transforms are natural tools for analyzing the textures produced by our model. In particular, we discuss how local histogram transforms can be used to produce numerical features that, when fed into mainstream classification schemes, mimic the baser aspects of a pathologist's thought process.

Keywords— histology, local histogram, occlusion.

I. INTRODUCTION

In this paper, we consider some mathematical theory that arose during the development of an automatic classification scheme for histology, specifically an algorithm that classifies the type and positioning of tissues found in digital microscopy images of hematoxylin and eosin (H&E) stained tissue sections. Here, we focus on the motivation behind the new mathematics itself; more detail on the particular application and classification scheme is given in [1]. The motivating application arose in studies of embryonic stem (ES) cells undertaken by Dr. John A. Ozolek of the Children's Hospital of Pittsburgh and Dr. Carlos Castro of the University of Pittsburgh. Understanding how ES cells differentiate into tissues will yield better insight into early biological development, and could advance research into tissue regeneration and repair, the treatment of genetic and developmental syndromes, and drug testing and discovery [1]. The work here arose from Ozolek and Castro's study of teratomas produced by injecting primate cells

into immunocompromised mice; a teratoma is a tumor which is known to contain tissues derived from each of the three primary germ layers of ectoderm, mesoderm and endoderm. Upon removal from the mice, the teratomas are sectioned, H&E stained, and digitally imaged using a microscope. An example of such an image is given in Figure 1.a; here, the purple-pink coloring is characteristic of H&E stain. In normal tissues, different tissue types are arranged in predictable ways. However, in teratomas, the tissues arrange themselves in seemingly chaotic fashions. Nevertheless, using their years of histology experience, Ozolek and Castro are able to look at these images and quickly discern which tissues are present, as well as their locations. In particular, for the image given in Figure 1.a, they have indicated the presence of several tissue types: cartilage, as typified by Figure 1.b, and concentrated in the lower left corner of the overall image; connective tissue, seen in detail in Figure 1.c, and forming a wide oval overall; and bone, detailed in Figure 1.d, and forming much of the center. Ozolek and Castro have large numbers of such images – many sections of many teratomas – and hope to gain new biological insight by determining the degree to which they contain certain tissues, as well as the spatial relationships between tissues. However, in order to gain this insight, they first need to have the tissues in these images classified according to type. When accomplished by hand, this task, though straightforward, is time-consuming, error-prone and laborious. When analyzing many images, the cost of this manual labor becomes prohibitively high, both in terms of time and money. As such, what is needed is an image processing system which can perform this analysis with minimal user input. In the following section, we discuss some basic concepts from the theory of image classification that we have borne in mind while designing such a system. These considerations lead to our use of local histogram transforms, whose basic properties are discussed in Section III. We further discuss an occlusion-based mathematical model for the histological images in question, using it to provide a rigorous analysis of the potential use of local histograms as image classification features.




(a) Histology image

(b) Cartilage

(c) Connective Tissue

(d) Bone

(e) Two-dimensional, redblue (RB) histogram of the red (x-axis) and blue (y-axis) channels of (a).

(f) Two-dimensional histogram of (b). The peak’s location indicates the dominance of dark blue-purple.

(g) Two-dimensional histogram of (c). Here, the dominant color is brighter than that of (b) and is more balanced in red/blue.

(h) Two-dimensional histogram of (d). The intensity is more similar to that of (b), but the balance of color is more similar to that of (c).

Fig. 1 Histological images and the two-dimensional histograms of their red and blue channels

II. CLASSIFICATION Most classification systems have two main components: a feature extractor and a decision rule. The feature extractor is a collection of transforms which, when applied to a given image, produce a feature vector which is intended to represent the essential properties of that image. The second component of a classification system is a decision rule, namely a function that assigns a label to a given feature vector. For example, for the H&E stained histology images depicted in Figure 1, Ozolek and Castro have indicated that when performing manual tissue classification, they believe their minds are making use of image features such as the color, shape, size and texture of the tissue structures. Based upon these qualities, and their experience and training, they are able to assign a label, such as “cartilage,” “connective tissue,” or “bone” to a given portion of a histological image. Our goal is to automate this process. In automated classification schemes, image features are produced using mathematical formulae. For example, we have investigated an automated classification system [2] that computes Haralick texture features of discrete wavelet transforms of the images in question. Once computed, this feature vector is then fed into a decision rule, typically a neural network or a support vector machine, to produce a label. In this paper, we will not comment further about

decision rules, and will rather focus entirely on our choice of histogram-based image features. There are two reasons why we shall make use of histograms, as opposed to other features. One reason is that histograms are easy to understand intuitively, and this intuition has been the key for us to conjecture and prove rigorous results concerning them. The second, more significant, reason is that histograms are directly related to the image features that Ozolek and Castro have indicated they themselves use when classifying histological images. For example, again consider the histological image given in Figure 1.a – a 2-D histogram of its pixel values is given below it in Figure 1.e. Here, the 2-D histogram is obtained by counting how many pixels have a given value of red (x-axis) and blue (y-axis); we have discarded the green channel of the RGB image, as it contains little distinguishing information in the purple-pink class of H&E stained images. As this histogram is taken over the entire image, it combines information from all tissues. Meanwhile, tissue-specific histograms are given in Figures 1.f, 1.g and 1.h. For example, in Figure 1.f, the low-valued, off-diagonal blob corresponds to the dominant, more-blue-than-red purple color of cartilage, while the long tail of the distribution is indicative of the white cell interiors. Meanwhile, the light pink of connective tissue and deeper red of bone can be discerned in the histograms of Figures 1.g and 1.h, respectively. We note that while these histograms themselves may be regarded as image




(a) Synthesized cartilage background (b) Synthesized cartilage foreground (c) Synthesized indicator function (d) Synthesized cartilage texture (e) RB histogram of (a) (f) RB histogram of (b) (g) RB histogram of (d)

Fig. 2 Synthesizing cartilage-like textures. A {0,1}-valued function (c) indicates where to occlude a background texture (a) by a foreground texture (b). The histogram of the synthesized image is a combination of the histograms of the background and foreground; the relative heights of the peaks of (g) can be used to infer how much background (cell exterior) and foreground (cell interior) is characteristic of a given tissue features, we have only been using the locations and heights of their dominant peaks. More importantly, we see that little information can be gleaned from looking at the histogram of the entire image – our goal is to determine which tissues are present at any given location, and as such, our histograms must have some location dependence. As such, it is natural to consider local histograms, that is, histograms that are only computed over a small neighborhood of any given pixel.

III. LOCAL HISTOGRAMS AND OCCLUSIONS Local histograms are a well-studied signal processing tool [3-8]. Here, we define local histograms for images f which are regarded as functions from one finite abelian group G of pixel locations into another finite abelian group P of pixel values. For example, for the 1200 by 1200 image given in Figure 1.a, we have G = Z1200 x Z1200, where ZN denotes the group of integers from 0 to N-1, in which arithmetic is performed modulo N. Here, the pixel values themselves lie in P =Z256 x Z256 ; we are considering only the 8-bit red and blue channels of the original RGB image. The local histogram of such an image f is defined in terms of a window, that is, a nonnegative function w over G that sums to one. To be precise, the local histogram of f with respect to w is:

(LH_w f)(g, p) = Σ_{h∈G} w(h) δ_{f(g+h)}(p).
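A direct implementation of this definition might look as follows (a sketch: the uniform window, the quantization to a one-dimensional value group rather than the paper's Z256 x Z256, and the function name are illustrative choices; it uses the per-value filtering realization described in the next paragraph).

import numpy as np
from scipy import ndimage

def local_histogram(f, w):
    """Local histogram of an integer-valued image f under window w.

    Implements (LH_w f)(g, p) = sum_h w(h) * delta_{f(g+h)}(p) by filtering,
    for each pixel value p, the indicator image {f == p} with the window w.
    Returns an array of shape f.shape + (num_values,).
    """
    values = np.arange(f.max() + 1)
    out = np.empty(f.shape + (values.size,))
    for p in values:
        indicator = (f == p).astype(float)
        # Circular convolution matches the finite-abelian-group (modular) setting
        out[..., p] = ndimage.convolve(indicator, w, mode='wrap')
    return out

# Example: 8-level quantized random image with a 9x9 uniform window
rng = np.random.default_rng(4)
f = rng.integers(0, 8, size=(64, 64))
w = np.ones((9, 9)) / 81.0                  # nonnegative window summing to one
LH = local_histogram(f, w)
assert np.allclose(LH.sum(axis=-1), 1.0)    # each local histogram sums to one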

For any fixed pixel value p in P, the corresponding portion of the local histogram may be computed by filtering the function which indicates where f assumes this particular pixel value with the window w. Even with this realization, the computation and storage of a local histogram requires nontrivial resources: for our running example, the local histogram is a four-dimensional array of size 1200 x 1200 x 256 x 256. In order to determine the appropriateness of using local histograms as feature transforms for histological images, we consider an occlusion-based model for synthesizing test images. Similar models have previously been considered in [9-13]. To be clear, given an indicator function I, which assigns a label from 0 to K-1 to each pixel location in G, we define the corresponding occlusion of a collection of images {f0,…,fK-1} to be the composite image:

(occ_I {f_k}_{k=0}^{K−1})(g) := f_{I(g)}(g).
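In code, this occlusion is just a pointwise selection; a minimal sketch (shapes and the random textures are illustrative):

import numpy as np

def occlude(I, images):
    """Composite image (occ_I {f_k})(g) = f_{I(g)}(g) for label image I."""
    stack = np.stack(images)                        # shape (K, H, W)
    return np.take_along_axis(stack, I[None], axis=0)[0]

# Example: foreground f1 occludes background f0 where the indicator is 1
rng = np.random.default_rng(5)
f0 = rng.integers(0, 256, (64, 64))                 # background texture
f1 = rng.integers(0, 256, (64, 64))                 # foreground texture
I = (rng.random((64, 64)) < 0.3).astype(int)        # {0,1}-valued indicator
composite = occlude(I, [f0, f1])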

An example of an image generated in this fashion is given in Figure 2.d. Here, the number of images is K=2, with f0 given in Figure 2.a, with f1 given in Figure 2.b, and the indicator function I given in Figure 2.c. Though by no means photorealistic, this synthesized image nevertheless possesses much of the basic color and shape information of cartilage. The reason we use such a simple image model is that it permits a rigorous analysis of the properties of local histograms. Indeed, examining the (global) histograms of the background, foreground and composite images of Figure 2, we note that




Fig. 3 Building complicated indicator functions from simple ones: (a) and (b) give two {0,1}-valued indicator functions; (c) gives a {0,1,2}-valued indicator function obtained by laying (a) over (b); (d) produces a distinct {0,1,2}-valued indicator function obtained by laying (b) over (a) the histogram of the composite is nearly a convex combination of the histograms of the background and foreground. Indeed, it is possible to show that such a result will always occur, even for local histograms, provided one does not focus on a single means of occlusion, but rather computes an expectation over every possible occlusion. To be precise, consider a probability density function P defined over the class of all {0,...,K-1}-valued indicator functions I over G. We say that P is fair if for all k = 0,...,K-1, there exists some real scalar λk such that:

Σ_I P(I) δ_k(I(g)) = λ_k.

When P is fair, we can prove that, on average, the local histogram of a composite image is indeed a convex combination of the local histograms of each image:

Theorem. If P is fair, then

E_I[LH_w(occ_I {f_k}_{k=0}^{K−1})] = Σ_{k=0}^{K−1} λ_k (LH_w f_k).

This begs the question of whether or not fairness is a realistic assumption. Our current research is focused on answering this question. In particular, we are studying methods of producing more complicated occlusion indicator functions from simpler examples, as given in Figure 3. In particular, new indicator functions may be produced by overlaying other known examples of them. More significantly, we can extend this notion of overlay to probability density functions over the set of all indicator functions, and can use this idea to build more complicated fair probabilities from simpler fair ones.

ACKNOWLEDGMENT Massar and Fickus were supported by AFOSR F1ATA09125G003. Bhagavatula and Kovačević were

supported by NIH through award NIH-R03-EB009875 and the PA State Tobacco Settlement, Kamlet-Smith Bioinformatics Grant. The authors would like to thank Dr. John A. Ozolek of the Children’s Hospital of Pittsburgh and Dr. Carlos Castro of the University of Pittsburgh. The views expressed in this article are those of the authors and do not reflect the official policy or position of the United States Air Force, Department of Defense, or the U.S. Government.

REFERENCES 1. Bhagavatula R., Fickus M., Ozolek J. A., Castro C. A., Kovačević J. (2010) Automatic identification and delineation of germ layer components in H&E stained images of teratomas derived from human and nonhuman primate embryonic stem cells. To appear in Proc. IEEE Int. Symp. Biomed. Imag. 2. Chebira A., Ozolek J. A., Castro C. A., Jenkinson W. G., Gore M., Bhagavatula R., Khaimovich I., Ormon S. E., Navara C. S., Sukhwani M., Orwig K. E., Ben-Yehudah A., Schatten G., Rhode G.K., Kovačević J. (2008) Multiresolution identification of germ layer components in teratomas derived from human and nonhuman primate embryonic stem cells. Proc. IEEE Int. Symp. Biomed. Imag. 979–982. 3. Koenderink J. J., van Doorn A. J. (1999) The structure of locally orderless images. Int. J. Comput. Vis. 31:159–168. 4. van Ginneken B., ter Haar Romeny B. M. (2000) Applications of locally orderless images. J. Vis. Commun. Image Represent. 11:196– 208. 5. Koenderink J. J., van Doorn A. J. (2000) Blur and disorder. J. Vis. Commun. Image Represent. 11:237–244. 6. van de Weijer J., van den Boomgaard R. (2001) Local mode filtering, Proc. IEEE Comput. Soc. Conf. Comput. Vis. & Pattern Recognit. 2:428–433. 7. Hadjidemetriou E., Grossberg M. D., Nayar S. K. (2004) Multiresolution histograms and their use for recognition. IEEE Trans. Pattern Anal. & Mach. Intell. 26:831–847. 8. Dalal N., Triggs B. (2005) Histograms of oriented gradients for human detection, Proc. IEEE Comput. Soc. Conf. Comput. Vis. & Pattern Recognit. 1:886–893. 9. Lee A. B., Mumford D. (1999) An occlusion model generating scaleinvariant images, Proc. IEEE Workshop Stat. & Comput. Theor. Vis. 10. Lee A. B., Mumford D., Huang J. (2001) Occlusion models for natural images: A statistical study of a scale-invariant dead leaves model. Int. J. Comput. Vis. 41: 35–59. 11. Mumford D., Gidas B. (2001) Stochastic models for generic images. Quart. Appl. Math. 59:85–111.




12. Ying Z., Castanon D. (2002) Partially occluded object recognition using statistical models. Int. J. Comput. Vis. 49:57–78.
13. Bordenave C., Gousseau Y., Roueff F. (2006) The dead leaves model: A general tessellation modeling occlusion. Adv. Appl. Probab. 38:31–46.

Author: Matthew Fickus
Institute: Air Force Institute of Technology
Street: 2950 Hobson Way
City: WPAFB
Country: USA
Email: [emailprotected]


Detecting and Classifying Cancers from Image Data Using Optimal Transportation

G.K. Rohde1, W. Wang1, D. Slepcev2, A.B. Lee3, C. Chen1, and J.A. Ozolek4

1 Center for Bioimage Informatics, Biomedical Engineering Department, Carnegie Mellon University, Pittsburgh, PA 15213, USA
2 Department of Mathematical Sciences, Carnegie Mellon University, Pittsburgh, PA 15213, USA
3 Department of Statistics, Carnegie Mellon University, Pittsburgh, PA 15213, USA
4 Department of Pathology, Children's Hospital of Pittsburgh, Pittsburgh, PA 15201, USA

Abstract— We describe a new approach to digital pathology that relies on measuring the optimal transportation (Kantorovich-Wasserstein) metric between pairs of nuclei obtained from histopathology images. We compare the approach to the standard feature space approach and show that our method performs at least as well, if not better, in automatically detecting and classifying different cancers of the liver and thyroid. 100% classification accuracy is obtained in 15 human test cases. In addition, we describe methods for using the geometric space framework to visualize and understand the differences in the data distribution that allow one to classify the data with high accuracy.

Keywords— Optimal transportation, nuclear structure, chromatin, pathology, classification.

I. INTRODUCTION

A. Motivation

Basic research in cancer treatment has focused on uncovering molecular signatures of tumors and designing new therapies that target specific growth and signaling pathways [1], [2]. Before therapy, however, an accurate diagnosis must be made. Surgical pathologists have used visual interpretation of nuclear structure to distinguish cancer from normal tissue for many years [3]. Aberrations in the genetic code and the transcription of different messenger RNAs lie at the heart of the transformation from normal to pre-malignant and malignant lesions [4]. These changes occur in the nucleus and are accompanied by the unfolding and repackaging of chromatin that, in part or in whole, produces changes in nuclear morphology (size, shape, membrane contours, the emergence of a nucleolus, chromatin arrangement, etc.). Nuclei can be big, small, round, elongated, bent, etc. Cells can have their chromatin distributed uniformly inside the nucleus, along its borders, concentrated into small regions (dots), anisotropically distributed, or any combination of the above. We propose a new approach to describe the distribution of nuclear structure in different tissue classes (cancers). In contrast to most previous works, in which each nucleus image is reduced to a set of numerical features, we utilize a

geometric approach to quantify the similarity of groups of nuclei. Beyond automated classification, our approach also seeks to provide easy-to-visualize information that characterizes and differentiates normal versus cancerous populations of cells. In this work we focus particularly on two diagnostic challenges: one in the liver and one in the thyroid. However, we believe our approach could be used whenever large quantities of nuclei can be reliably segmented.

B. Previous Works on Automated Digital Pathology

Computational approaches have emerged as very powerful tools for reproducible and automated cancer diagnosis based on histopathology digital images. For decades, numerous papers have been published using computational methods to separate diagnostic entities, and some commercial software packages have been developed to screen for cancer cells with varying degrees of success [5]. The overwhelming majority of computational approaches follow a standard feature-based procedure in which an image is represented by a set of numerical features (see [6], [7], [8] for reviews). These methods can be described as a "pipeline" consisting of image preprocessing (normalization, segmentation), feature extraction, and classification of the state of the tissue (e.g. normal or diseased) (see [5], [8], [9] for a few examples). These methods have been applied to the diagnosis of several types of cancers. While successful in some cases (see our earlier work [10], where we applied such an approach to some of the same data used in the results shown below), feature-based methods have some important limitations. First, although classification can be accomplished in some cases, it is difficult to learn useful and biologically relevant information about the cells or tissues. This is because classifiers operating in multidimensional feature spaces rely on combinations (linear or nonlinear) of features, each with different units, making physical interpretation notoriously difficult. Secondly, the reduction of each image to a set of features results in compression of information; information from the digital image that may ultimately have diagnostic or biological significance is discarded.


C. Overview of Our Contribution: A Geometric Framework for Nuclear Morphometry Using Optimal Transportation

We describe a new approach for nuclear chromatin morphometry and pathology that utilizes the optimal transportation (OT) metric for quantifying the distribution of nuclear morphometry in different tissue classes. We believe the OT metric can capture some of the important information that defines the differences in nuclear structure in different cells (see Figure 1C for a few examples). More precisely, we utilize the OT metric to quantify how much chromatin, in relative terms, is distributed in each region of the nucleus (see subsection III-A for more details). Once a metric can be computed, classification of sets of nuclei is achieved with a kernel support vector machine approach, utilizing the distances given by the OT metric, in combination with a majority voting procedure. We compare the classification accuracies of the OT metric with several implementations of the more standard feature-based approach often utilized in digital image pathology. We also devise methods for visualizing and understanding the differences between nuclear distributions in different tissues (normal vs. cancerous), which utilize geodesics derived from the OT framework, together with the Fisher Linear Discriminant Analysis (LDA) technique.

II. DATA AND PRE-PROCESSING

A. Tissue Processing and Imaging

Tissue blocks were obtained from the archives of the University of Pittsburgh Medical Center. Cases for analysis can be separated into two categories, thyroid and liver. Thyroid cases included five resection specimens with the diagnosis of follicular adenoma of the thyroid (FA) and five cases of follicular carcinoma of the thyroid (FTC); liver cases included five cases of fetal-type hepatoblastoma (HB). Tissue sections were cut at 5 micron thickness and stained using the Feulgen technique, which only stains DNA in deep magenta (see Figure 1A). All images were acquired using a light microscope with a 100X objective (see [10] for details). Slides that contained both lesion (HB, FA and FTC) and adjacent normal-appearing tissue (NL) were chosen by the pathologist (J.A.O.). For each case, between 10 and 20 random fields were imaged to guarantee that at least 200 nuclei were obtained, for both lesion and normal tissue.

B. Segmentation, Intensity Normalization and Preprocessing

Nuclear segmentation consisted of a three-step procedure that included a graph cut initialization method [11] and a level set optimization [12] for obtaining smooth contours segmenting each nucleus. In the end, the pathologist (J.A.O.) reviewed all the segmented nuclei and removed nuclei that were incorrectly segmented or imaged out of focus. A typical segmentation result is shown in Figure 1B. Images containing individual nuclei were converted to grayscale by selecting the green channel from the RGB images and inverting the intensity values, such that a zero (color coded in black) corresponds to the relative minimum amount of chromatin in the nucleus. All nuclei were normalized so that the sum of their intensity values is 1. This was done to guarantee that non-uniformities related to staining and image acquisition, from case to case, are not able to interfere with our method. In total, we extracted 1550 nuclei for our experiment. A few sample nuclei chosen from the entire data set are displayed in Figure 1C. Nuclei images were also preprocessed as in our previous works to eliminate, approximately, variations due to arbitrary rotation, translation, and coordinate inversions of each nucleus [13].

Fig. 1 Sample image data. A: raw image. B: segmented image. C: individual segmented nuclei after preprocessing. Sample nuclei show variations in size, shape, etc. Each of these images has been contrast stretched for best visualization.

III. METHODS

A. Optimal Transportation for Comparing Nuclear Chromatin

Here we describe the optimal transportation metric used for quantifying and classifying nuclear structure. In our application, each nuclear structure is represented in a gray-level digital image (of size 192 x 192 pixels). Each image $I$ containing one single nucleus can be represented as

$$I = \sum_{i=1}^{M} v_i\,\delta_{x_i} \qquad (1)$$

where $\delta_{x_i}$ is a Dirac delta function at pixel location $x_i$, $M$ is the number of pixels in image $I$, and $v_i$ are the pixel intensity values. To accelerate the computation, we use a point-mass approximation to model the chromatin distribution of each nucleus. Specifically, we use Lloyd's weighted K-means algorithm [14] to adjust the positions and weights of a set of $N < 800$ particle masses to approximate the total intensity distribution of each nucleus. In the discrete setting, the OT minimization problem reduces to finding

$$d(I_0, I_1) = \min_{f} \sum_{i=1}^{N_p} \sum_{j=1}^{N_q} c(x_i, y_j)\, f_{i,j} \qquad (2)$$

subject to

$$\sum_{i=1}^{N_p} f_{i,j} = I_1(y_j), \qquad \sum_{j=1}^{N_q} f_{i,j} = I_0(x_i), \qquad f_{i,j} \geq 0,$$

with $N_p$ and $N_q$ the numbers of masses chosen for representing images $I_0$ and $I_1$, and $c(x,y)$ the "cost" of transporting unit mass located at $x$ to location $y$. We use the quadratic symmetric cost $c(x,y) = c(y,x) = |x - y|^2$. We utilize Matlab's implementation to solve the linear program. We note that the optimal transportation distance metric has been used in the past for different image analysis problems [15], [16]. The geodesic interpolation between $I_0$ and $I_1$ can be approximated by $I_\alpha$ with $\alpha \in [0, 1]$:

$$I_\alpha = \sum_{i=1}^{N_p} \sum_{j=1}^{N_q} f_{i,j}\, \delta_{(1-\alpha)x_i + \alpha y_j}. \qquad (3)$$

B. Supervised Classification

1) Kernel-based support vector machines: In our own previous experience we have found that the support vector machine (SVM) method, when combined with a simple voting strategy, performed best when compared with other classification methods for determining the class of a given set of nuclei [10]. We use the kernel SVM [17] to train and test the data. In our work, we utilize the radial basis function (RBF) kernel

$$K(I_i, I_j) = \exp\left(-\gamma \left\| f(I_i) - f(I_j) \right\|^2\right), \qquad \gamma \geq 0,$$

whenever numerical features are used, where $f$ is the function that computes features from images. In order to utilize the OT distances described above, the kernel is modified as

$$K(I_i, I_j) = \exp\left(-\gamma'\, d_{OT}(I_i, I_j)^2\right), \qquad \gamma' \geq 0.$$

For multiple-class problems, we use the "one-versus-one" strategy [18] to reduce the single multiclass problem into multiple binary problems, and use a max-wins voting strategy to combine these binary results and classify the testing instance.

2) Cross validation: Cross validation is performed to select the optimal parameters as well as to test the average classification accuracy of the system. We use a "leave-one-out" strategy to separate the data into training and testing sets, where data from one case is used for testing and the remaining cases are used for training the classifier. In order to train a classifier that has good predictive accuracy, we use k-fold cross validation to further separate the training set into two parts (lower level), and search for a good error penalty C (which determines how the SVM tolerates errors during training), as well as the kernel parameter $\gamma$, that give the best accuracy over the k folds. We set k = 10 and perform an exhaustive search for the two parameters.

C. Characterizing Distributions of Nuclei

The geodesics that connect the nuclear structures in the entire dataset can be used to characterize and contrast the differences between different tissue classes. The idea is to interpret each nuclear structure as a point on the OT manifold and seek geodesics onto which the projection (in the same metric sense) of nuclear exemplars from different tissue classes differs most according to some quantitative criterion. We use the Fisher Linear Discriminant Analysis (LDA) method [19] to find such geodesics automatically. However, because explicit "coordinates" for each nuclear structure are not available (only pairwise distances), we first use multidimensional scaling (MDS) to find a Euclidean embedding for the data; we then use Fisher LDA in this Euclidean space to find the most discriminating direction.
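As an illustration of Sec. III-A, the discrete problem (2) can be posed directly as a linear program. The paper solves it with Matlab's LP solver; the sketch below uses scipy as a stand-in, and assumes the point locations and masses come from the weighted K-means approximation described above:

```python
import numpy as np
from scipy.optimize import linprog

def ot_distance(x, p, y, q):
    """Kantorovich OT distance between two weighted point sets with
    quadratic cost |x - y|^2, as in Eq. (2).

    x: (Np, 2) particle locations, p: (Np,) masses summing to 1
    y: (Nq, 2) particle locations, q: (Nq,) masses summing to 1
    """
    Np, Nq = len(p), len(q)
    # Cost matrix c(x_i, y_j) = |x_i - y_j|^2, flattened row-major.
    C = ((x[:, None, :] - y[None, :, :]) ** 2).sum(-1)
    # Marginal constraints: row sums equal p, column sums equal q.
    A_eq = np.zeros((Np + Nq, Np * Nq))
    for i in range(Np):
        A_eq[i, i * Nq:(i + 1) * Nq] = 1.0   # sum_j f_ij = p_i
    for j in range(Nq):
        A_eq[Np + j, j::Nq] = 1.0            # sum_i f_ij = q_j
    b_eq = np.concatenate([p, q])
    res = linprog(C.ravel(), A_eq=A_eq, b_eq=b_eq, bounds=(0, None))
    return res.fun  # minimal total transport cost d(I0, I1)
```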


IV. RESULTS

Here we describe results obtained in analyzing nuclear structure in two different diagnostic challenges, one in the liver and the other in the thyroid. We first show that the distances computed using the OT framework can achieve accuracy similar to the traditional feature-based approach to this problem described in detail in [10]. We then demonstrate how the OT framework described above can be used to extract meaningful quantitative information depicting the differences (in a distribution sense) that allow the data to be automatically classified.

The results of classifying individual liver cases using RBF-kernel SVM methods, for both features and the OT metric, are contained in Table 1. For thyroid cases (omitted for brevity), the feature-based and OT-metric-based methods have similar performance (average accuracy for feature-based: NL 80.6%, FA 61.7%, FTC 54.7%; for the OT metric: NL 80.6%, FA 58.4%, FTC 62.4%). We note that the feature-based and OT-based classifiers are identical in their implementation using the kernel SVM method; the only difference is the actual distance (OT vs. feature-based normalized Euclidean distances). We use the automatic method described in section III.C to identify discriminant geodesic projections for the liver data (Figure 2). Results suggest that, according to the available data, the most important information for discriminating between NL and HB is the relative amount of chromatin concentrated towards the border of the nucleus. The histogram shown in Figure 2 suggests that it is uncommon for HB nuclei to have a chromatin distribution concentrated exclusively at the nuclear periphery. The same experiment suggests that nuclear size is the most discriminating information for the thyroid data (results not shown).
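A sketch of the OT-kernel classifier of Sec. III-B using a precomputed kernel follows (scikit-learn is our stand-in for the authors' implementation; γ and the error penalty C would be chosen by the nested cross validation described above):

```python
import numpy as np
from sklearn.svm import SVC

def train_ot_svm(D_train, y_train, gamma=1.0, C=1.0):
    """Kernel SVM on a precomputed OT-distance matrix.
    D_train: (n, n) pairwise distances d_OT(I_i, I_j); y_train: labels.
    Note: exp(-gamma * d^2) is not guaranteed positive definite for an
    arbitrary metric d, which is worth checking in practice."""
    K_train = np.exp(-gamma * D_train ** 2)
    clf = SVC(C=C, kernel="precomputed")
    clf.fit(K_train, y_train)
    return clf

# Prediction uses the (n_test, n_train) block of kernel values:
# clf.predict(np.exp(-gamma * D_test_train ** 2))
```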


Table 1 Average classification accuracy in liver data

           Features   OT metric
Case 1     89.0%      93.0%
Case 2     92.0%      91.0%
Case 3     94.0%      92.0%
Case 4     80.0%      89.0%
Case 5     71.0%      84.0%
Average    85.2%      89.8%

V. DISCUSSION AND CONCLUSION

A new approach for automated digital pathology using nuclear structure is described. The approach is based on quantifying chromatin morphology in different tissue classes (normal, cancer A, cancer B, etc.) using the optimal transportation metric between pairs of nuclei. These distances are utilized within a supervised learning framework to build a classifier capable of determining the tissue class to which a particular set of nuclei belongs. We compare our approach to the standard feature-based classification approach using image data from a total of 15 human cases. Results show that in most cases, on average, the optimal transportation metric performs at least as well as, or better than, a popular feature-based implementation. In addition to automated classification, we also describe how optimal transportation-based geodesic paths can be used to summarize differences between the nuclear structure (chromatin distribution) of different tissue classes. The approach involves computing the pairwise distances between all nuclei in the dataset and using the MDS technique to find a Euclidean embedding for the data. Fisher LDA is then applied to discover the modes of variation that are most responsible for distinguishing two classes of nuclei. Once the variation, in the form of an optimal transportation geodesic, is computed, a projection of the data can be used to visualize the main differences in chromatin configuration in two or more tissue classes.
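The visualization pipeline just summarized (pairwise OT distances → MDS embedding → Fisher LDA) might be sketched as follows; the embedding dimension is an arbitrary placeholder:

```python
import numpy as np
from sklearn.manifold import MDS
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

def discriminating_projection(D, y, dim=10):
    """D: (n, n) pairwise OT distances between nuclei; y: class labels.
    Returns the 1-D projection used to histogram the two populations."""
    # Euclidean embedding of the precomputed distance matrix.
    X = MDS(n_components=dim, dissimilarity="precomputed").fit_transform(D)
    # Most discriminating direction in the embedded space.
    lda = LinearDiscriminantAnalysis(n_components=1)
    return lda.fit_transform(X, y)  # coordinates along the LDA direction
```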

Fig. 2 Geodesic identified automatically by our method. Distribution of the data over this geodesic shows variation in where chromatin is positioned within the nucleus (bottom). The histograms over the corresponding images indicate the relative number of nuclei in each population of the data (normal vs. HB) that looked closest (in the OT sense) to it.

REFERENCES
1. W.W. Ma and A.A. Adjei, "Novel agents on the horizon for cancer therapy", CA Cancer J Clin, vol. 59, no. 2, pp. 111-137, (2009)
2. C.M. Schlotter, U. Vogt, H. Allgayer, and B. Brandt, "Molecular targeted therapies for breast cancer treatment", Breast Cancer Res, vol. 10, no. 4, p. 211, (2008)
3. G.N. Papanicolaou, "New cancer diagnosis", in Proceedings of the 3rd Race Betterment Conference, Michigan, (1928), p. 528
4. D. Zink, A.H. Fischer and J.A. Nickerson, "Nuclear structure in cancer cells", Nat. Rev. Cancer, vol. 4, pp. 677-687, (2004)
5. E. Bengtsson, "Fifty years of attempts to automate screening for cervical cancer", Med. Imaging Tech., vol. 17, pp. 203-210, (1999)
6. P.H. Bartels, T. Gahm and D. Thompson, "Automated microscopy in diagnostic histopathology: From image processing to automated reasoning", Int. J. Imaging Systems and Technology, vol. 8, no. 2, pp. 214-223, (1998)
7. C. Demir and B. Yener, "Automated cancer diagnosis based on histopathological images: a systematic survey", Tech. Rep. TR-05-09, Rensselaer Polytechnic Institute, (2005)
8. K. Rodenacker and E. Bengtsson, "A feature set for cytometry on digitized microscopy images", Anal. Cell. Pathol., vol. 25, pp. 1-36, (2003)
9. J. Gil and H.S. Wu, "Application of image analysis to automatic pathology: realities and promises", Cancer Investigation, vol. 21, no. 6, pp. 950-959, (2003)
10. W. Wang, J.A. Ozolek and G.K. Rohde, "Detection and classification of thyroid follicular lesions based on nuclear structure from histopathology images", Cytometry A, Jan (2010)
11. Y. Boykov and G. Funka-Lea, "Graph cuts and efficient N-D image segmentation", Intern. J. Comp. Vis., vol. 70, no. 2, pp. 109-131, (2006)
12. C. Li, R. Huang, Z. Ding, C. Gatenby, D. Metaxas and J. Gore, "A variational level set approach to segmentation and bias correction of images with intensity inhomogeneity", Int. Conf. Med. Image Comput. Assist. Interv., vol. 11, pp. 1083-1091, (2008)
13. G.K. Rohde, A.J.S. Ribeiro, K.N. Dahl and R.F. Murphy, "Deformation-based nuclear morphometry: capturing nuclear shape variation in HeLa cells", Cytometry A, vol. 73, no. 4, pp. 341-350, (2008)
14. S.P. Lloyd, "Least squares quantization in PCM", IEEE Trans. Inf. Theory, vol. 28, no. 2, pp. 129-137, (1982)
15. Y. Rubner, C. Tomasi and L.J. Guibas, "The earth mover's distance as a metric for image retrieval", Intern. J. Comp. Vis., vol. 40, no. 2, pp. 99-121, (2000)
16. S. Haker, L. Zhu, A. Tannenbaum and S. Angenent, "Optimal mass transport for registration and warping", Intern. J. Comp. Vis., vol. 60, no. 3, pp. 225-240, (2004)
17. M.A. Aizerman, E.M. Braverman and L.I. Rozonoer, "Theoretical foundations of the potential function method in pattern recognition learning", Automation and Remote Control, vol. 25, pp. 821-837, (1964)
18. U.H.-G. Kreßel, "Pairwise classification and support vector machines", Advances in Kernel Methods: Support Vector Learning, pp. 255-268, (1999)
19. R.A. Fisher, "The use of multiple measurements in taxonomic problems", Annals of Eugenics, vol. 7, pp. 179-188, (1936)


Nanoscale Imaging of Chemical Elements in Biomedicine

M.A. Aronova1, Y.C. Kim2, A.A. Sousa1, G. Zhang1, and R.D. Leapman1

1 National Institutes of Health / National Institute of Biomedical Imaging and Bioengineering, Bethesda, Maryland, USA
2 Center for Computational Materials Science, Naval Research Laboratory, Washington DC, USA

Abstract— Imaging techniques based on transmission electron microscopy can elucidate the structure and function of macromolecular complexes in a cellular environment. In addition to providing contrast based on structure, electron microscopy combined with electron spectroscopy can also generate nanoscale contrast from endogenous chemical elements present in biomolecules, as well as from exogenous elements introduced into tissues and cells as imaging probes or as therapeutic drugs. These capabilities complement biomedical imaging used in diagnostics while also providing insight into fundamental cell biological processes. We have developed electron tomography (ET) techniques based on unconventional imaging modes in the electron microscope to map specific types of macromolecules within cellular compartments. ET is used to determine the three-dimensional structure from a series of two-dimensional projections acquired successively by tilting a specimen through a range of angles, and then by reconstructing the three-dimensional volume. We have focused on two approaches that combine ET with other imaging modes: energy filtered transmission electron microscopy (EFTEM) based on collection of inelastically scattered electrons, and scanning transmission electron microscopy (STEM) based on collection of elastically scattered electrons. EFTEM tomography provides 3D elemental mapping and STEM tomography provides 3D mapping of heavy atom clusters used to label specific macromolecular assemblies. These techniques are illustrated by EFTEM imaging of the subcellular nucleic acid distribution through measurement of the intrinsic marker, elemental phosphorus; and by STEM imaging of gold clusters used to immunolabel specific proteins within the cell nucleus. We have also used the EFTEM and STEM techniques to characterize nanoparticles that might be used as drug delivery systems.

Keywords— Electron tomography, elemental mapping, energy-filtered imaging, scanning transmission electron microscopy.

I. INTRODUCTION

Emerging methods in nanoscale imaging create new opportunities to explore basic biological processes in the life sciences. Each of these imaging techniques contributes a unique type of information, corresponding to a specific mechanism of contrast generation, though each technique also has its own trade-offs. Meanwhile, an avalanche of information regarding the specific genes and proteins involved in disease mechanisms has produced large numbers of candidate molecules that can interact with a particular biological target of interest. The imaging field has embraced this opportunity through the discovery and development of a range of novel approaches for generating protein- and gene-specific contrast in an image. For example, in magnetic resonance imaging (MRI), a contrast agent not only gives clues about the location of a specific organ abnormality, but can also be used to quantify its size, growth rate and possibly chemical composition [1, 2]. MRI and many other imaging modalities are enabling the non-invasive visualization and quantification of specific biological processes [3]. On the cellular level, electron microscopy (EM), which bridges the gap in spatial resolution between x-ray crystallography and light microscopy, provides detailed structural information. The related approach of electron tomography (ET) [4], together with various reconstruction algorithms, can generate the 3D organization of cells and their components. However, quantitative information that strengthens and enhances ET is rarely obtained, since it is difficult to extract and interpret the data. One of our goals has been to develop and implement efficient acquisition and quantitative interpretation of EM data to answer some of the basic questions related to cellular biology.

II. IMAGING MODALITIES

A. TEM

The most commonly used operation mode in the electron microscope is TEM, in which electrons transmitted through the specimen are imaged on a CCD detector. With recent advances in software and hardware, TEM can now be combined with ET in a relatively easy way to obtain 3D density maps. These maps, depending on the type of specimen and preparation technique, can provide a remarkable amount of detail. For example, in the case of frozen-hydrated viruses [5] or prokaryotic cells [6], high-resolution 3D x-ray structures can be docked into the lower-resolution electron density obtained from TEM.


However, it is difficult to extract quantitative information from these types of 3D density maps, since contrast is generated either from phase contrast in the case of frozen-hydrated specimens or from high-angle scattering in the case of heavy-atom-stained preparations.

B. EFTEM

Energy filtered transmission electron microscopy (EFTEM), using the inelastic signal from excitation of inner-shell electrons by the incident electron beam [7, 8], has improved the sensitivity and spatial resolution of elemental mapping. This is facilitated by the latest generation of charge coupled device (CCD) detectors, the design of magnetic energy filters, and flexible control of the data acquisition. In the EFTEM the transmitted electrons are dispersed in energy at an energy-selecting slit (Fig. 1). This slit selects a specific energy range, and magnetic lenses behind the slit produce an energy-selected image on the CCD detector. Elemental maps are obtained by subtracting a background image recorded at an energy loss below a core edge from an image recorded above the core edge. Alternatively, the energy-selecting slit can be removed and the energy loss spectrum can be recorded on the CCD camera, which allows quantitative elemental analysis. EFTEM mapping has been applied successfully to biological systems as well as to a wide range of materials [8, 9, 10]. In biological applications, elemental maps of unstained sections provide information about the 2-D distributions of various classes of biomolecules within a sectioned cell [11]. For example, the nitrogen signal provides information about the total distribution of biomolecules, including proteins and nucleic acids [12, 13]. Sulfur indicates the presence of proteins that contain high levels of the amino acids cysteine and methionine [14]. Phosphorus reveals the distribution of nucleic acids, phosphorylated proteins, and phospholipids [15, 16], since small molecules containing phosphorus are mostly removed in plastic-embedded preparations. In the cell nucleus, the relative concentrations of phospholipids and phosphoproteins are low enough for the phosphorus distribution to provide information about the packing density of DNA within chromatin [17]. However, a quantitative interpretation of elemental distributions requires a 3-D analysis due to ambiguity caused by overlapping structures in 2-D projections. 3-D elemental distributions can be obtained by combining EFTEM with ET.

C. STEM

In the scanning transmission electron microscopy (STEM) mode, a finely focused nanometer-sized probe is scanned across a specimen, and a variety of signals is detected at each pixel in a 2D array. An annular dark-field detector placed after the specimen picks up the high-angle elastic scattering produced by heavy atoms. A bright-field detector situated on axis collects transmitted electrons, which is advantageous for imaging micrometer-thick specimens. In addition, it is also possible to acquire EELS data at each pixel to obtain hyperspectral images from which compositional information can be extracted.
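The background-subtraction step of Sec. II-B can be sketched as follows. The text above describes a two-window subtraction; this sketch uses the common three-window variant, in which a power-law background is fitted per pixel from two pre-edge images and extrapolated under the edge. The window energies below are hypothetical values for a phosphorus L2,3 map:

```python
import numpy as np

def three_window_map(I1, I2, I3, E1=105.0, E2=120.0, E3=155.0):
    """Elemental map from two pre-edge images I1, I2 (at energy losses
    E1 < E2 below the core edge) and one post-edge image I3 (at E3),
    assuming the usual power-law background model B(E) = A * E**(-r)."""
    eps = 1e-12                   # guard against log(0) / division by zero
    r = np.log((I1 + eps) / (I2 + eps)) / np.log(E2 / E1)
    A = I2 * E2 ** r              # per-pixel power-law amplitude
    background = A * E3 ** (-r)   # background extrapolated past the edge
    return np.clip(I3 - background, 0.0, None)
```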

Fig. 1 Schematic diagram of electron microscope with EFTEM, STEM, and tomography capabilities. Essential components are: magnetic lens (L), magnetic energy filter (M), energy-selecting slit (S) and charge-coupled device camera (CCD). At the slit plane (arrow) an electron energy loss spectrum (EELS) is formed, and a 2D EFTEM image is produced at the CCD. With this arrangement 3D information can also be obtained in EFTEM, STEM and TEM tomography modes.

III. TOMOGRAPHY IN THE ELECTRON MICROSCOPE

In ET, a specimen of thickness from 50 nm to 1000 nm is tilted over a range of angles and imaged in an EM to provide a series of projections onto planes perpendicular to the beam direction (Fig. 2). By backprojecting the images and summing over all the orientations, it is thus possible to obtain a three-dimensional reconstruction of the specimen [18, 19]. This technique is increasingly finding applications in numerous fields of science [4, 20].
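A toy 2-D analogue of this procedure (unfiltered backprojection of a tilt series, our illustration only; practical ET uses filtered backprojection or iterative schemes) might look like:

```python
import numpy as np
from scipy.ndimage import rotate

def backproject(sinogram, angles_deg):
    """Unfiltered backprojection: smear each 1-D projection back across
    the plane along its viewing direction and sum over orientations.
    sinogram: (n_angles, n_pixels); the tilt-angle sign convention
    depends on the acquisition geometry."""
    n = sinogram.shape[1]
    recon = np.zeros((n, n))
    for proj, theta in zip(sinogram, angles_deg):
        smear = np.tile(proj, (n, 1))           # constant along the rays
        recon += rotate(smear, theta, reshape=False, order=1)
    return recon / len(angles_deg)
```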


Fig. 2 A representation of tomography in 2D. The projections of the specimen (curves) are recorded as the specimen is tilted. To obtain the original object these projections can then be backprojected using various reconstruction algorithms.

Although conventional ET provides important structural information about a cell at the macromolecular scale, it is useful to obtain other types of complementary information in order to identify and quantify specific types of macromolecules within cellular compartments.

IV. APPLICATIONS

With the aim of extending the conventional 2D approach to EFTEM, we have combined ET and EFTEM. This technique, which we call quantitative electron spectroscopic tomography (QuEST), is demonstrated by determining the subcellular distribution of nucleic acids by measuring elemental phosphorus. Specifically, excitations of inner-shell electrons of phosphorus atoms in the specimen result in a characteristic energy loss at 132 electron volts (the L2,3 edge), corresponding to ejection of 2p electrons, which can be detected in the energy-filtered images. We have explored the potential of QuEST for determining the organization of DNA and proteins in cell nuclei. Previously, the 3-D ultrastructure of the nucleus has mainly been derived from imaging samples stained with extrinsic heavy metals. Use of energy filtering for mapping phosphorus in cells has been limited to 2-D [9]. We have demonstrated that QuEST reveals the 3-D distributions of nucleic acid within the nucleus. For example, we have been able to quantify the phosphorus content of individual ribosomes, which contain RNA, and of the cell nucleus, which contains DNA in the form of chromatin arranged in 10 nm or 30 nm fibers [21]. The ribosomes (blue-green) and nuclear chromatin (orange) in a Drosophila larval cell are shown in Figure 3.

Fig. 3 Quantitative analysis of 3D phosphorus distribution: the SIRT algorithm was used to reconstruct the EFTEM tomographic tilt series and the 3-D volume was rendered with Amira software. Ribosomes (colored blue-green) exhibit contrast from the phosphorus in their RNA and chromatin (colored orange) from its DNA.

The simultaneous iterative reconstruction technique (SIRT) was used to reconstruct the EFTEM tomographic data, since it allows preservation of the numerical values associated with the densities of the elements present. Our quantitative analysis showed that ribosomes contain 8000 ± 2000 P atoms, in agreement with the known value of around 7000 P atoms. The density of phosphorus was 0.7 ± 0.2 atoms per nm³, which is consistent with a model of tightly packed 30 nm chromatin fibers. In this way it was possible to visualize and quantify the phosphorus distribution in 3D within a sectioned eukaryotic cell. It is also feasible to image other biological elements, e.g., iron and nitrogen, in 3D. We also considered some important practical limitations of the technique, including: (1) precision in extracting the phosphorus signal, (2) detection limits, and (3) effects of damage when the specimen is irradiated with 300 kV electrons.
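For reference, a generic SIRT iteration is sketched below for an explicit projection matrix (real QuEST reconstructions operate on full tilt series with tuned relaxation):

```python
import numpy as np

def sirt(A, b, n_iter=50, relax=1.0):
    """Simultaneous iterative reconstruction technique for A @ x = b,
    in the standard row/column-normalized form x += C A^T R (b - A x).
    Because the update is linear in the data, it preserves quantitative
    density values, which suits elemental (QuEST) reconstructions.
    A: (n_rays, n_voxels) projection matrix; b: measured projections."""
    row = A.sum(axis=1); row[row == 0] = 1.0   # R^-1 diagonal (row sums)
    col = A.sum(axis=0); col[col == 0] = 1.0   # C^-1 diagonal (column sums)
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        x += relax * (A.T @ ((b - A @ x) / row)) / col
    return x
```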


ACKNOWLEDGMENT This work was supported by the intramural research program of the NIH.

REFERENCES
1. Jung C W, Jacobs P (1995) Physical and chemical properties of superparamagnetic iron oxide MR contrast agents: ferumoxides, ferumoxtran, ferumoxsil. Magn Res Imag 13(5):661–674
2. Svenson S, Tomalia D A (2005) Dendrimers in biomedical applications - reflections on the field. Adv Drug Deliv Rev 57(15):2106–2129
3. Cherry S R (2004) In vivo molecular and genomic imaging: new challenges for imaging physics. Phys Med Biol 49:R13–R48
4. McIntosh R et al., Trends Cell Biol 15:43
5. Grünewald K, Desai P, Winkler D C, Heymann J B, Belnap D M, Baumeister W, Steven A C (2003) Three-dimensional structure of herpes simplex virus from cryo-electron tomography. Science 302:1396–1398
6. Grünewald K, Medalia O, Gross A, Steven A C, Baumeister W (2003) Prospects of electron cryotomography to visualize macromolecular complexes inside cellular compartments: implications of crowding. Biophys Chem 100:577–591
7. Reimer L (1995) Energy-Filtering Transmission Electron Microscopy. Springer, Berlin
8. Egerton R F (2003) New techniques in electron energy-loss spectroscopy and energy-filtered imaging. Micron 34:127–139
9. Krivanek O, Friedman S, Gubbens A, Kraus B (1995) An imaging filter for biological applications. Ultramic 59:267–282
10. Hofer F, Warbichler P (2004) Elemental mapping using energy filtered imaging. In: Ahn C (Ed.), Transmission Electron Energy Loss Spectrometry in Materials Science and the EELS Atlas, second ed. Wiley-VCH, Berlin
11. König P, Braunfeld M B, Sedat J W, Agard D A (2007) The three-dimensional structure of in vitro reconstituted Xenopus laevis chromosomes by EM tomography. Chromosoma, DOI: 10.1007/s00412-007-0101-0

12. Goping G, Pollard H B, Srivastava M, Leapman R (2003) Mapping protein expression in mouse pancreatic islets by immunolabeling and electron energy loss spectrum-imaging. Microsc Res Tech 61:448–456
13. Bazett-Jones D P, Hendzel M J, Kruhlak M J (1999) Stoichiometric analysis of protein- and nucleic acid-based structures in the cell nucleus. Micron 30:151–157
14. Leapman R D, Jarnik M, Steven A C (1997) Spatial distributions of sulfur-rich proteins in cornifying epithelia. J Struct Biol 120:168–179
15. Korn A, Spitnik-Elson P, Elson D, Ottensmeyer F P (1983) Specific visualization of ribosomal RNA in the intact ribosome by electron-spectroscopic imaging. Eur J Cell Biol 31:334–340
16. Ottensmeyer F P (1984) Electron spectroscopic imaging: parallel energy filtering and microanalysis in the fixed-beam electron microscope. J Ultrastruct Res 88:121–134
17. Ottensmeyer F P (1984) Electron spectroscopic imaging: parallel energy filtering and microanalysis in the fixed-beam electron microscope. J Ultrastruct Res 88:121–134
18. Frank J (1992) Electron Tomography: Three-dimensional Imaging with the Transmission Electron Microscope. Plenum Press, New York
19. Mastronarde D N (1997) Dual axis tomography: an approach with alignment methods that preserve resolution. J Struct Biol 120:343–352
20. Midgley P A, Weyland M (2003) 3D electron microscopy in the physical sciences: The development of Z-contrast and EFTEM tomography. Ultramic 96(3-4):413–431
21. Aronova M A, Kim Y C, Harmon R, Sousa A A, Zhang G, Leapman R D (2007) Three-dimensional elemental mapping of phosphorus by quantitative electron spectroscopic tomography (QuEST). J Struct Biol 160:35–48

Author: Maria A. Aronova
Institute: National Institutes of Health/National Institute of Biomedical Imaging and Bioengineering
Street: 9000 Rockville Pike, Bldg 13/3N17
City: Bethesda, MD
Country: USA
Email: [emailprotected]


Sparse Representation and Variational Methods in Retinal Image Processing

J. Dobrosotskaya1, M. Ehler1,2, E. King1,2, R. Bonner2, and W. Czaja1

1 Norbert Wiener Center for Harmonic Analysis and Applications, Department of Mathematics, University of Maryland, College Park, MD 20742
2 National Institutes of Health, Eunice Kennedy Shriver National Institute of Child Health and Human Development, PPB/LIMB/SMB, Bethesda, MD 20892

Abstract- Relations between different types of cameras used for retinal imaging were studied with the purpose of improving the quantitative precision of the imaging data (used for diagnostics and medical research). Based on the differences in visual quality and quantitative parameters, we designed analytical models of the effects that cameras introduce into the retinal data and described possible ways of digital post-processing. Some processing tasks involve detection and separation of features (such as the retinal microvessels) prior to subsequent analysis of underlying retinal pathology. The mathematical techniques for feature detection and inpainting are variational, implemented via numerically stable gradient descent schemes. Other tasks involve estimates of translation-invariant sparse image coefficients that allow separating the background and the significant scales of the image from the texture-like auxiliary information. The above techniques are based on recent work on the wavelet Ginzburg-Landau energy and on methods of adaptive thresholding of the stationary wavelet transform coefficients. We consider algorithms with partial specialist supervision and a deliberate choice of processing methods for different eye areas, as well as separate processing of healthy vs. pathological eye data.

Keywords- Retinal imaging, variational method, edge detection, wavelet.

I. Introduction

Ophthalmologists often rely on retinal imaging to diagnose, detect, and follow disease progression. Classifying early stages of age-related macular degeneration (AMD), for instance, relies on qualitative and quantitative analyses of the data from the confocal scanning laser ophthalmoscope (cSLO) and standard fundus camera images [13, 7, 9]. A decrease in macular pigment has been identified as a risk factor for AMD, and observing its distribution over time would allow further conclusions to be drawn about the natural and pathological dynamics of macular pigment changes. However, due to inter- and cross-modality variations, better quantitative measurements are still needed [12].

Macular pigment measurements based on two-wavelength autofluorescence images were introduced by Delori et al. [8]. To compute the macular pigment map, we either pair a blue cSLO image (488 nm excitation, > 500 nm emission) with a yellow standard fundus image (520-600 nm excitation, > 600 nm emission) or, if available, we pair blue (460-500 nm excitation) and yellow standard fundus images [9]. While image artifacts and background components cancel out (to some extent) in inter-modality pairings, they are emphasized in cross-modality pairings and introduce significant errors into the macular pigment maps. To trace the dynamics of macular pigment and other chromophore changes in retrospective studies, one must compare cSLO autofluorescence with standard fundus autofluorescence, and quantitative measurements require image pre-processing to reduce image artifacts, non-uniform illumination profiles, and contrast differences between modalities.

This paper addresses two retinal image analysis problems. First, we correct autofluorescence images from cross-modalities (cSLO, standard fundus camera) to be used in the same two-wavelength computations of macular pigment maps. Blood vessels are detected and masked to facilitate quantitative analysis. Secondly, we extract a binary map of the retinal vascular system from cSLO images. These two techniques share the common feature of using the stationary wavelet transform for translation-invariant operations. Since the microvascular system in the image has relatively high contrast, some special features of the wavelet decomposition of almost-binary images provide the apparatus for the detection of the blood vessels.
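As a purely schematic illustration of the two-wavelength idea mentioned above (this is not the calibrated model of Delori et al. [8]; the extinction coefficients and reference region below are placeholders), a macular pigment map can be formed from the log-ratio of two registered autofluorescence images:

```python
import numpy as np

def mp_map(af_blue, af_yellow, ref_mask, k_blue=0.62, k_yellow=0.10):
    """Schematic two-wavelength macular pigment map. af_blue/af_yellow
    are registered autofluorescence images; ref_mask marks a peripheral
    region assumed free of macular pigment; k_* are placeholder
    extinction coefficients at the two excitation wavelengths."""
    eps = 1e-12
    log_ratio = np.log10(af_blue + eps) - np.log10(af_yellow + eps)
    baseline = log_ratio[ref_mask].mean()   # zero-pigment reference level
    return (baseline - log_ratio) / (k_blue - k_yellow)
```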

II. Adaptive translation invariant wavelet thresholding

First, we introduce the adaptive thresholding technique that was designed to make only the minimal changes needed to obtain a more reliable pigment map using images from different digital sources. The need for non-uniform, "relative" thresholding arises from the necessity to automate the procedure, as well as from the need to use different thresholds for edges in separate directions. We use the image decomposition via the translation-invariant (stationary) wavelet transform to perform the adaptive corrections. The involved wavelet function ψ is assumed to be sufficiently regular and compactly supported. The respective 2D wavelet basis is assumed to be


separable, so that the wavelet coefficients can be categorized into the directions H, V and D (horizontal, vertical and diagonal, respectively).

Relative thresholding in Besov spaces

If we consider a characteristic function of some measurable set $u = \chi_E \in L^2(\mathbb{R}^2)$, then, due to $\psi$ having compact support and $\chi_E$ being locally homogeneous of degree 0, we get

$$\langle u, \psi_{j,k} \rangle \approx 2^{p} \langle u, \psi_{j+p,\,2^p k} \rangle.$$

The decrease in the coefficient values as the scale increases, together with the respective increase of the integration domain for the translation parameter $k$, makes the standard deviation change as $O(2^{-2j})$ as the scale $j$ increases. If the boundary of the set $E$ is piecewise smooth, one can expect the coefficients of the wavelet modes supported within the same distance from the boundary to be of the same order of magnitude.

Let us define the thresholding rule for each scale $j$ of the wavelet decomposition. In order to do that, consider the expected value (the mean) $M_j$ and the standard deviation $D_j$ of the set of all wavelet coefficients $c_{j,k} = \langle \psi_{j,k}, u \rangle$ within one fixed direction at a fixed scale $j \in \mathbb{N}$:

$$M_j = 2^{-Nj} \sum_{k_i=0}^{2^J-1} c_{j,k}, \qquad D_j^2 = 2^{-Nj} \sum_{k_i=0}^{2^J-1} (c_{j,k} - M_j)^2 = 2^{-Nj} \sum_{k_i=0}^{2^J-1} c_{j,k}^2 - M_j^2.$$

We define the relative significance threshold at scale $j$ as $\tau_j = C 2^{2j} D_j$, $C = 2^{-2J_{max}}$, where $J_{max}$ is the thresholding scale, defined either as the maximum level of wavelet decomposition $J$ (i.e. the image resolution) or as the scale that stores the most significant information (visually significant, or defined by a specific application or the given data quality).

In this manner, we define the following unified criterion S for the relative wavelet thresholding. A mode $\psi_{j,k}$ is chosen to be relatively significant for a function $u$ within a chosen direction (H, V, or D), i.e. $b_u(j,k) = 1$, if and only if it differs from the mean coefficient value at the scale $j$ by more than the standard deviation times the dyadic scaling multiple:

$$|\langle u, \psi_{j,k} \rangle - M_j| \geq C 2^{2j} D_j.$$

Thus, the relative thresholding leaves intact those coefficients that differ sufficiently (as much as in the binary image case) from the mean of all coefficients at this wavelet scale.

Numerical tests

Figs. 1-2 show the results of the numerical tests that were performed using a cSLO and a yellow standard fundus image. The cSLO image was modified via the adaptive wavelet thresholding procedure defined above in order to compensate for the contrast differences specifically near the blood vessels (Fig. 1(c)). The result of the MPM computation without any additional correction is shown in Fig. 2(a); the result of the MPM computation using the thresholded cSLO image is shown in Fig. 2(b). One can see that while unevenness of the MPM originating from sources other than blood vessels was retained, the vessel-related artifacts that are visible in Fig. 2(a) are almost completely eliminated.

Fig 1 Images: (a) blue cSLO; (b) yellow fundus; (c) corrected blue cSLO.

Fig 2 (a) MPM computed without correction, (b) MPM computed using the thresholded blue cSLO image.

III. Semi-supervised contour detection using the wavelet Ginzburg-Landau energy

The wavelet Ginzburg-Landau energy (WGL), introduced in [3, 4], was shown to be effective in variational methods for various imaging problems. Here we describe one of its applications, which is essentially a variational method for detecting the microvascular elements within retinal images. As was mentioned before, blood vessels are a completely different structure for the purposes of the retinal image analysis and, in particular, should be excluded from the computation of the macular pigment map in order to improve the precision of the latter. Here, once again, we will use the assumption that they are of sufficient contrast with respect to the background, i.e. can be treated as almost-binary elements of the image. The semi-supervision required here involves the manual choice of several pixel areas that belong to the blood vessels, prior to the automated computations that detect the rest of the microvascular system.

Wavelet Ginzburg-Landau energy

Fourier analysis provides many elegant approaches to differential operators and related tools in PDE-based image processing. In the design of the wavelet Ginzburg-Landau energy, a more localized basis than the Fourier one was used in the context of variational methods based on diffuse interfaces ([5] and more). Such a construction is naturally consonant with image processing applications involving binary images, and treats (recovers) the respective binary values as two equilibria of some system [1], [6]. WGL originated from the idea of designing new types of pseudo-differential energy functionals that inherit important properties of the ones involving derivatives, but leave out the computational drawbacks associated with discrete differentiation. The key idea in [3] combined the basic geometric framework of diffuse interface methods with the advantages of the well-localized and inherently multiscale wavelet operators.

The Total Variation (TV) seminorm was proven to be a natural and efficient measure of image regularity [2], [11]. To avoid computational challenges related to the equations for minimizers of this norm, one can reformulate the problem using the phase-field method and approximate the TV functional (in the Γ sense). The Ginzburg-Landau (GL) functional (reconstructed here in its standard form with the diffuse interface parameter $\epsilon$, whose glyphs were lost in extraction),

$$GL(u) = \frac{\epsilon}{2} \int |\nabla u(x)|^2 \, dx + \frac{1}{4\epsilon} \int W(u) \, dx, \qquad W(u) = u^2 (u-1)^2,$$

is a diffuse interface approximation to the Total Variation functional $\int |\nabla u(x)| \, dx$ in the case of binary images [10]. The GL energy is used in the modeling of a vast variety of phenomena, including second-order phase transitions. However, if used in signal processing applications, diffuse interface models tend to produce results that are oversmoothed compared to the optimal output. In the new model the $H^1$ seminorm $\int |\nabla u(x)|^2 \, dx$ is replaced with a Besov seminorm (or a Besov-type seminorm defined using 0-regular wavelets). This allows one to construct a method with properties similar to those of the PDE-based methods but without as much diffuse interface scale blur.

The "wavelet Laplace operator" was defined by having the wavelet basis functions as eigenfunctions, and acting on those in the same "scale-proportional" manner as the Laplace operator acts on the Fourier basis. Given an orthonormal wavelet $\psi$, the "wavelet Laplacian" of any $u \in L^2(\mathbb{R})$ is formally defined as

$$\Delta_w u = - \sum_{j=0}^{+\infty} \int 2^{2j} \langle u, \psi_{j,\kappa} \rangle \psi_{j,\kappa} \, d\kappa, \qquad \psi_{j,\kappa} = 2^j \psi(2^j x - \kappa).$$

Then the "wavelet Allen-Cahn" equation $u_t = \Delta_w u - \frac{1}{\epsilon} W'(u)$ describes the gradient descent in the problem of minimizing the wavelet Ginzburg-Landau (WGL) energy:

$$WGL(u) := \frac{1}{2} |u|_B^2 + \frac{1}{4\epsilon} \int W(u) \, dx, \qquad |u|_B^2 = \sum_{j=0}^{+\infty} 2^{2j} \int |\langle u, \psi_{j,\kappa} \rangle|^2 \, d\kappa, \qquad (3)$$

where $|u|_B^2$ is the square of the Besov 1-2-2 (translation-invariant) seminorm if the wavelet $\psi$ is r-regular, $r \geq 2$. WGL functionals are inherently multiscale and take advantage of simultaneous space and frequency localization, thus allowing much sharper minimizer transitions for the same values of the interface parameters compared to the classical GL energy.

WGL minimization can be performed via the gradient descent method. The latter problem is equivalent to solving the following ODE with a sufficiently regular initial condition $u(x,0) = u_0(x)$:

$$u_t = \Delta_w u - \frac{1}{\epsilon} W'(u). \qquad (GD)$$

The above problem is well-posed: it has a unique solution that exists globally in time and converges to a steady state as $t \to \infty$. The steady state solution is infinitely smooth provided that the wavelet $\psi$ used in the construction of the energy has sufficient regularity.

Modified WGL in the variational formulation of the segmentation problem

Our segmentation model involves minimizing the sum of WGL (as a regularizer) and the $L^2$ spatial and the $B^1_{2,2}$ edge-preserving forcing terms:

$$E(u) = WGL(u) + \frac{\mu_s}{2} \| (u - f) \chi_\Omega \|_{L^2}^2 + \frac{\mu_w}{2} |Pr_\Lambda (u - u_{orig})|_B^2, \qquad (MWGL)$$

where $\chi_\Omega$ and $\chi_\Lambda$ are masks in the spatial and wavelet domains respectively, and $\mu_s$ and $\mu_w$ are the corresponding weights. $\Omega$ is assumed to be the manually preclassified part of the image, $\Lambda$ the set of wavelet modes that need to be preserved close to the original image. The function $f$ assumes the value 1 at the non-vessel pixels and 0 at the pixels within the blood vessels in the image; $u_{orig}$ denotes the original image rescaled to the range [0, 1]. The set $\Lambda$ of "relatively" significant modes is obtained by the adaptive thresholding method described earlier.

The gradient descent equation for this modified WGL energy takes the form

$$u_t = \Delta_w u - \frac{1}{\epsilon} W'(u) - \mu_s (u - f) \chi_\Omega - \mu_w \Delta_w Pr_\Lambda (u - u_{orig}).$$

The initial guess used for the numerical simulations may be chosen to be equal to the given image except for black and white values at the preclassified areas:

$$u(x, 0) = u_0(x) = u_{orig}\, \chi_{\Omega^c}(x) + f\, \chi_\Omega(x).$$

Numerical simulations were performed using discrete gradient-stable semi-implicit schemes. Indeed, $\Delta_w$ is a diagonal operator in the wavelet basis, but the presence of the nonlinearity does not allow making the scheme fully implicit. The gradient stability is achieved by the convexity splitting method described in [3]. The computational speed of WGL-based algorithms is mostly determined by the choice of the translation-invariant discrete wavelet transform. The stationary wavelet transform (SWT) matches the model perfectly; however, it requires more operations than the FFT that is used within the related PDE-based methods. The fact that the SWT is relatively slow in comparison with the FFT is compensated by WGL-based methods requiring fewer iterations to converge. Thus, the pseudo-differential method is comparable to or outperforms the PDE methods in terms of CPU time.


Numerical tests

The test was performed on an average of several cSLO images (Fig. 3). Depending on the combination of the parameters $\epsilon$ and $\mu$, which define the importance of the output being binary and of having the same set of edges, respectively, the results vary in the level of detail and the sharpness of the vessel/non-vessel classification (Fig. 3(c)).

Fig 3 (a) Initial image, (b) the set of edges that need to be preserved (denoted f in the algorithm), (c) the resulting maps of detected blood vessels.

IV. Conclusions

The authors addressed some questions related to retinal image processing, in particular the computation of the macular pigment map. A successful method for the correction of autofluorescence images from cross-modalities (cSLO, standard fundus camera), allowing them to be used in the same two-wavelength computations of macular pigment maps, was introduced along with a variational technique for the extraction of a binary map of the retinal vascular system from cSLO images. The latter can be improved by designing an explicit, non-iterative way of finding or approximating solutions of the described variational problem and, thus, decreasing the computational time. This issue is one of the aspects of the authors' work in progress.

Acknowledgements

The research was funded by the Intramural Research Program of NICHD/NIH, by NSF (CBET0854233), by NGA (HM15820810009), and by ONR (N000140910144). The authors are grateful to Professors John J. Benedetto and Andrea L. Bertozzi for many insightful discussions and their long-term support.

References

[1] A. Bertozzi, S. Esedoglu, and A. Gillette. Analysis of a two-scale Cahn-Hilliard model for image inpainting. Multiscale Modeling and Simulation, 6(3):913-936, 2007.
[2] A. Chambolle and P.-L. Lions. Image recovery via total variation minimization and related problems. Numerische Mathematik, 76(2):167-188, April 1997.
[3] J. Dobrosotskaya and A. Bertozzi. A Wavelet-Laplace variational technique for image deconvolution and inpainting. IEEE Transactions on Image Processing, 17(5), 2008.
[4] J. Dobrosotskaya and A. Bertozzi. Wavelet Ginzburg-Landau energy in the edge-preserving variational techniques of image processing. To be submitted to SIAM Jour. of Appl. Analysis, March 2010.
[5] S. Esedoglu and J. Shen. Digital inpainting based on the Mumford-Shah-Euler image model. Euro. Jnl of Applied Mathematics, 13:353-370, 2002.
[6] S. Esedoglu. Blind deconvolution of bar code signals. Inverse Problems, (20):121-135, 2004.
[7] A.C. Bird et al. An international classification and grading system for age-related maculopathy and age-related macular degeneration. The International ARM Epidemiological Study Group. Surv Ophthalmol, 5(39):367-374, 1995.
[8] F.C. Delori et al. Macular pigment density measured by autofluorescence spectrometry: comparison with reflectometry and heterochromatic flicker photometry.
[9] M. Ehler et al. High-resolution autofluorescence imaging for mapping molecular processes within the human retina. UMD, 2010. SBEC.
[10] G. Dal Maso. An Introduction to Gamma-Convergence. Progress in Nonlinear Differential Equations and their Applications. Birkhauser Boston, Inc., Boston, MA, 1993.
[11] L.I. Rudin, S. Osher, and E. Fatemi. Nonlinear Total Variation based noise removal algorithms. Physica D, 60:259-268, 1992.
[12] S. Beatty, F.J. Van Kuijk, and U. Chakravarthy. Macular pigment and age-related macular degeneration: longitudinal data and better techniques of measurement are needed. Invest Ophthalmol Vis Sci, 3(49), 2008.
[13] S.M. Meyers, M.A. Ostrovsky, and R.F. Bonner. A model of spectral filtering to reduce photochemical damage in age-related macular degeneration. Trans Am Ophthalmol Soc, (102):83-93, 2004.


Optimization and Validation of a Biomechanical Model for Analyzing Running-Specific Prostheses

Brian S. Baum1, Roozbeh Borjian1, You-Sin Kim1, Alison Linberg1, and Jae Kun Shim1,2,3

1 Department of Kinesiology, University of Maryland, College Park, MD USA
2 Department of Bioengineering, University of Maryland, College Park, MD USA
3 Neuroscience and Cognitive Science (NACS) Graduate Program, University of Maryland, College Park, MD USA

Abstract— Modeling the ankle joint during amputee locomotion is difficult since a definitive joint axis may not exist. Gait analysis estimates joint center positions and defines body segment motions by placing reflective markers on anatomical landmarks. Inverse dynamics techniques then estimate joint kinetics (forces and moments) and mechanical energy expenditure using data from ground reaction forces (GRFs) and the most distal joint (usually the ankle) to make calculations for proximal joints. Running-specific prostheses (RSPs) resemble a “C” or “L” shape rather than the human foot. This allows RSPs to flex and return more propulsive energy, like a spring, but no “ankle” exists. Current biomechanical models assume such a joint exists by placing markers arbitrarily on the RSP (e.g. the most acute point on the prosthesis curvature). These models are not validated and may produce large errors since inverse dynamics assumes rigid segments between markers but RSPs are designed to flex. Moreover, small errors in distal joint kinetics calculations will propagate up the chain and inflate errors at proximal joints. This study develops and validates a model for gait analysis with RSPs. Reflective markers were placed 1 cm apart along the lateral aspects of five different RSPs. Prostheses were aligned in a material testing system between two load cells. Forces simulating peak running loads were applied and the load cells measured forces and moments at the top (applied force) and bottom (GRF) of the prostheses. Inverse dynamics estimated force transfers from the bottom to top of the prostheses through the defined segments. Differences between estimated and applied values at the top are considered model error. Error will be calculated for every possible combination of markers to determine the minimal marker set with an “acceptable” level of error. The results yield a model that can be confidently used during gait analyses with RSPs. Keywords—Kinetics, Amputee, Amputation, Prosthesis.

I. INTRODUCTION
Modeling the lower extremity joints, and specifically the ankle joint, proves to be a continual source of difficulty and remains an inherent problem in analyzing the locomotion (walking and running) of individuals with lower extremity amputations (ILEA).

Many of today's commonly prescribed prosthetic foot designs are either energy storage and return (ESAR) or dynamic response feet, which resemble an intact foot. During a three-dimensional gait analysis, reflective markers are placed on anatomical landmarks to estimate the positions of joint centers and to define the body segment motions. Researchers will often treat current prostheses like an intact limb and label the relative locations of the landmarks on the prosthesis. In the biomechanics of human locomotion, identifying the ankle joint is one of the most important tasks because the calculations of joint kinetics (forces and torques) and joint mechanical energy expenditure start from the ankle joint. A small joint position error at the ankle can easily propagate up the chain to the knee, hip, and beyond, producing greater errors in the joint kinetics calculations at these more proximal joints. In previous amputee locomotion studies, markers defining the ankle joint axis were often affixed to spots on the prosthetic foot mimicking the marker placement on the intact foot and ankle complex. With the development of running-specific prostheses, new prosthetic foot designs have emerged that no longer resemble the human foot. Many of the designs resemble a "C" or "L" shape at the distal end of the limb, which allows the prosthesis to flex and return more energy for propulsion during running, similar to a spring. These designs do not have a typical ankle joint (Fig. 1a-b); however, similar methods of biomechanical analysis have been employed to analyze these prostheses as have been used for ESAR feet, dynamic response feet, traditional prosthetic feet, and the intact limb. Studies investigating running with these devices have estimated the prosthetic limb ankle joint to be either at the same relative position as the intact limb's ankle joint or at the most acute point on the prosthesis curvature (i.e., the greatest curvature; see Fig. 1a-b) [1-3]. These estimations have not been validated and potentially result in large errors in the kinetic calculations and subsequent interpretations of results. Consequently, improved and validated modeling techniques are needed to estimate accurate centers of rotation for running prostheses, techniques that can be applied to multiple prosthetic designs and be utilized for those with bilateral lower extremity amputations, where an intact ankle joint is not available for reference.




Fig. 1 a-b Literature has reported marker placement for running prostheses (a) placed at the height of the intact limb's lateral malleolus or (b) the point at which the radius of the prosthesis is most acute

An accurate model will provide data that can be interpreted with confidence and is needed to produce the biomechanical and physiological data necessary to identify optimal running techniques, prosthetic alignment, prosthetic designs, training regimens, and energy efficiency. Without understanding the biomechanical and physiological consequences of exercise after amputation, clinicians will have difficulty prescribing appropriate prostheses and exercise regimens to people with lower extremity amputations, with and without diseases such as diabetes, high blood pressure, and obesity. The aim of this experiment is to develop and validate a model with unique optimal marker placements for specific running prosthesis designs and to determine the resultant optimal marker placement across all tested designs.

II. METHODS
A biomechanical model is being developed through motion analysis of running-specific prostheses in a material testing system (MTS, Eden Prairie, MN). Four running-specific prosthesis designs are being tested for this project: the 1E90 Sprinter (OttoBock Inc.), Flex-Run (Ossur), Cheetah® (Ossur), and Nitro Running Foot (Freedom Innovations). These prostheses were chosen because they are the most commonly prescribed running-specific prostheses on the market. Each prosthesis was placed in the MTS between two load cells (Bertec PY6, Columbus, OH) in a neutral alignment (Fig. 2). Neutral alignment was defined according to the specific manufacturers' recommendations for prosthesis alignment.

Fig. 2 Approximate marker placement on running prosthesis and position in MTS machine between two 6-DOF forceplates (FP). Fewer markers than actual are shown in this illustration for clarity

The load cells captured data at 1,000 Hz. Forces up to 2,300 N were applied to simulate peak vertical forces commonly observed during running (approximately three times the body weight of a 75 kg person), and the load cells measured the force and moment at the head (applied) and toe (simulated ground reaction force and moment) of each prosthesis. Reflective markers were placed at 1 cm intervals along the lateral aspect of the keel of each running-specific prosthesis (see Fig. 2). Reflective markers were also placed orthogonally on the anterior, lateral, and medial aspects of the "head" of the prosthesis, at the point of connection to the socket or pylon, in order to define the local coordinate system of the prosthesis. A 6-camera motion capture system (Vicon, Oxford, UK) with a capture frequency of 500 Hz was used to collect the 3-D positional data of the markers during each trial. Two consecutive markers defined individual segments of the prosthesis (assumed to be rigid), and consecutive segments shared a common marker. The joint between these segments was assumed to be a hinge joint. Standard inverse dynamics calculations were made to estimate the force and torque transfer from the base of the prosthesis to the head through the defined prosthesis segments. The difference between the force and moment values at the head of the prosthesis estimated from the inverse dynamics calculations and the values measured directly by the top load cell is considered model error. Force and moment estimations will be made with every combination of markers, giving a resultant error value for each combination. These error values will be analyzed to determine an "acceptable" level of error for a minimal marker set that can be used by most motion capture laboratories. Less than 5% error from the peak force and moment values will be considered acceptable.
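To make the propagation concrete, the following minimal Python sketch works through the quasi-static case described above: a chain of rigid links defined by consecutive markers carries the measured ground reaction load up to the head, and candidate marker subsets are scored against the directly applied load. The planar (2-D) simplification, the per-segment mass, and all names and numbers are illustrative assumptions, not the study's actual processing code.

    import itertools
    import numpy as np

    def propagate_load(markers, grf, grm, seg_mass=0.0, g=9.81):
        """Quasi-static Newton-Euler pass from the bottom load cell to the
        head of the prosthesis. markers: (N, 2) marker positions in metres,
        ordered distal -> proximal; grf: (2,) ground reaction force in N;
        grm: reaction moment about z in N*m, taken about markers[0].
        Each pair of consecutive markers bounds one rigid link; accelerations
        are neglected, which suits the slowly applied MTS loads."""
        markers = np.asarray(markers, dtype=float)
        force = np.asarray(grf, dtype=float).copy()
        moment = float(grm)
        for lower, upper in zip(markers[:-1], markers[1:]):
            weight = np.array([0.0, -seg_mass * g])    # link weight at midpoint
            r_force = lower - upper                    # arm of transmitted force
            r_weight = 0.5 * (lower + upper) - upper   # arm of the link weight
            moment += r_force[0] * force[1] - r_force[1] * force[0]
            moment += r_weight[0] * weight[1] - r_weight[1] * weight[0]
            force = force + weight
        return force, moment

    def subset_errors(markers, grf, grm, applied_moment, seg_mass=0.05):
        """Score every marker combination that keeps both endpoints, as in
        the exhaustive search described above; yields (indices, % error)."""
        n = len(markers)
        for k in range(0, n - 1):
            for inner in itertools.combinations(range(1, n - 1), k):
                idx = [0, *inner, n - 1]
                _, m_est = propagate_load([markers[i] for i in idx],
                                          grf, grm, seg_mass)
                yield idx, 100.0 * abs(m_est - applied_moment) / abs(applied_moment)

A subset would then be accepted once its yielded error falls under the 5% criterion, mirroring the minimal-marker-set selection described above.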

III. RESULTS
Complete results are not yet available, as testing is still in progress. Early preliminary data suggest that fewer than eight markers will be necessary to estimate force and moment transfer through running-specific prostheses accurately (less than 5% error) using inverse dynamics equations.

IV. DISCUSSION
A majority of motion capture laboratories have a limited number of cameras and may have difficulty tracking a large number of closely spaced markers during activities such as running. Determining a minimal marker set for running-specific prostheses is important to ensure widespread use of such a model regardless of the number of cameras available to a laboratory. Moreover, fewer markers on a prosthesis make setup less tedious and save testing time. Optimal marker sets (the fewest markers yielding acceptable error) will be identified for each running-specific prosthesis design, and an attempt will be made to identify an overall optimal marker set that yields the smallest summed error across all designs. The development and validation of an accurate biomechanical model for use with running-specific prostheses will allow researchers to fully examine the kinematic and kinetic adaptations that occur during running in ILEA. Virtually no information is available in the literature to guide clinicians in aligning, prescribing, or rehabilitating ILEA who wish to run. It is currently unknown whether running with running-specific prostheses poses an increased risk for injury to the residual limb joints or the joints of the contralateral limb. ILEA are already at greater risk of degenerative joint diseases such as osteoarthritis (OA), and the larger forces generated during running could promote the development and progression of these diseases. Prior research supports that OA may initiate in joints that experience a traumatic or chronic event (such as amputation due to injury or disease) that causes kinematic changes [4]. The rate of OA progression is currently thought to be associated with increased loads during ambulation [4, 5]. Identifying running techniques, prosthetic alignments, or new prosthetic designs that reduce peak lower extremity joint loading may reduce the risk of developing OA and slow its progression.


Additional research needs include investigating the effects of various prosthetic components in meeting different running goals and investigating variations in prosthetic alignment that minimize asymmetries and maximize energy efficiency during running.

V. CONCLUSIONS
A validated biomechanical model is necessary to advance our analysis and understanding of the effects of using running-specific prostheses. Development of this model will allow researchers to systematically analyze the kinematic and kinetic adaptations of individuals with lower extremity amputations during running. This information will lead to improved prosthetic prescription and alignment, rehabilitation techniques, and prosthetic designs that improve performance and reduce the risks of injury and disease.

ACKNOWLEDGMENTS This research was funded by the University of Maryland’s Department of Kinesiology Graduate Research Initiative Fund.

REFERENCES
1. Buckley JG (2000) Biomechanical adaptations of transtibial amputee sprinting in athletes using dedicated prostheses. Clin Biomech 15:352-358
2. Buckley JG (1999) Sprint kinematics of athletes with lower-limb amputations. Arch Phys Med Rehabil 80:501-508
3. Burkett B, Smeathers J, Barker T (2003) Walking and running inter-limb asymmetry for Paralympic trans-femoral amputees, a biomechanical analysis. Prosthet Orthot Int 27:36-47
4. Andriacchi TP, Mundermann A (2006) The role of ambulatory mechanics in the initiation and progression of knee osteoarthritis. Curr Opin Rheumatol 18:514-518
5. Andriacchi TP, Koo S, Scanlan SF (2009) Gait mechanics influence healthy cartilage morphology and osteoarthritis of the knee. J Bone Joint Surg Am 91 Suppl 1:95-101

Corresponding author:
Author: Brian S. Baum
Institute: University of Maryland, College Park
Street: Department of Kinesiology
City: College Park
Country: USA
Email: [emailprotected]

Prehension Synergy: Use of Mechanical Advantage during Multi-finger Torque Production on Mechanically Fixed- and Free-Object
Jaebum Park1,2, You-Sin Kim2, Brian S. Baum2, Yoon Hyuk Kim5, and Jae Kun Shim2,3,4,5

1 Department of Kinesiology, The Pennsylvania State University, University Park, USA
2 Department of Kinesiology, University of Maryland, College Park, USA
3 Department of Bioengineering, University of Maryland, College Park, MD 20742
4 Neuroscience and Cognitive Science (NACS) Graduate Program, University of Maryland, College Park, MD 20742
5 Department of Mechanical Engineering, Kyung Hee University, Global Campus, Korea 130-701

Abstract— The aim of this study was to test the mechanical advantage (MA) hypothesis, greater involvement of effectors with longer moment arms, in multi-finger torque production tasks in humans. Seventeen right-handed subjects held a customized rectangular handle and produced prescribed torques on the handle, and the forces from all five digits were recorded. There were eight experimental conditions: two prehension types under different sets of mechanical constraints (fixed-object prehension and free-object prehension), two torque directions (supination and pronation), and two torque magnitudes (0.24 and 0.48 Nm). The subjects were asked to produce prescribed torques during fixed-object prehension or to maintain a constant position of the free handheld object, which required the same magnitude and direction of torque as the fixed-object prehension. The index of MA was calculated for agonist and antagonist fingers, which produce torques in the same direction as and opposite direction to the assigned torques, respectively. Among agonist fingers, the fingers with longer moment arms produced greater grasping forces, while among antagonist fingers, the fingers with shorter moment arms produced greater grasping forces. These results support the MA hypothesis. The MA index was greater in the fixed-object condition as compared to the free-object condition, and greater in the pronation condition than in supination. We conclude that the central nervous system utilizes the MA of fingers during multi-finger torque production tasks. Keywords— Prehension, mechanical advantage, torque production.

I. INTRODUCTION
When the human motor system involves redundant motor effectors for a specific motor task, the central nervous system needs to provide a solution for the motor task by determining the involvement of the multiple effectors. Specifically, when the motor task involves production of a torque using multiple effectors that are aligned in parallel and contribute to the torque [1], the central nervous system may consider the mechanical advantage (MA) of the effectors as a solution to the redundant motor system.

Previous studies showed that effectors with greater MA are associated with greater involvement in muscle activation patterns [2] and multi-digit grasping tasks [3]. The MAs of individual effectors in the system are determined mainly by their anatomical structures, such as the origins and insertions of individual muscles and the parallel arrangement of the fingers. Ultimately, the use of effectors with greater MA would be an effective way to perform such tasks, minimizing the total "effort" (e.g., the total force used for the task). Previous studies suggested that the central nervous system (CNS) utilizes the MA of fingers during torque production tasks [3]. According to the MA hypothesis, the fingers positioned further away from the axis of rotation have greater MA due to their longer moment arms. Force production by the lateral fingers (i.e., index and little fingers) would therefore be a more effective way of producing moments than force production by the central fingers, due to the longer moment arms of the lateral fingers. The selection of individual finger forces and moments is partially governed by the controller's specific principles; thus, utilizing the MA of the various fingers in multi-finger torque production tasks can be a controller-specific strategy for managing the kinetically redundant hand-finger system, and a way to minimize the total finger force in torque production. However, this would only be true when the fingers act as moment agonists, i.e., when the effectors produce moments of force in the required direction. Actions of individual fingers are not independent because of the inter-dependent muscle-tendon connections of the fingers [4] and common inputs to the same finger muscles [5]. Thus, a voluntary movement or force production by one finger is often accompanied by involuntary movements or forces of other fingers [6]. The CNS might therefore produce a smaller force with a finger having a longer moment arm when that finger produces a moment of force opposite to the required direction (i.e., an antagonist moment). In this study we employed a free object and a mechanically fixed object in static prehension in order to investigate the effect of static constraints during static prehension and how the CNS controls the digits' forces and moments against static constraints. The following two general hypotheses were tested: 1) The MA of fingers is utilized by the CNS in both agonist and antagonist fingers. 2) Mechanical constraints (fixed- vs. free-object prehension) are considered by the CNS in the utilization of the MA of fingers, in both agonist and antagonist fingers.

II. METHOD
Seventeen right-handed male volunteers (age: 29 ± 3.1 years, weight: 67.1 ± 2.9 kg, height: 174.2 ± 5.3 cm, hand length: 18.7 ± 2.5 cm, and hand width: 8.7 ± 0.9 cm) were recruited for the current study. Before testing, the experimental procedures were explained to the subjects, and the subjects signed a consent form approved by the University of Maryland's Institutional Review Board (IRB). Five six-component (three force and three moment components) transducers (Nano-17s, ATI Industrial Automation, Garner, NC, USA) were attached to an aluminum handle (Fig. 1) in order to measure each digit's forces and moments. One six-component (three position and three angle components) magnetic tracking sensor (Polhemus LIBERTY, Rockwell Collins Co., Colchester, VT, USA) was mounted on top of the aluminum handle in order to provide feedback on the linear and angular positions of the handle during the free-object prehension task (Fig. 1). The thumb sensor was positioned at the midpoint between the middle and ring finger sensors in the vertical direction. In addition, a horizontal aluminum beam (32 cm in length) was attached to the bottom of the handle in order to hang a load (0.31 kg) at different positions along the beam, so as to provide different external torques in the free-object condition. The sampling frequency was set at 50 Hz. The experiment consisted of two sessions. In the first session, the subjects performed four single-finger maximal voluntary force (MVF) production tasks (index, middle, ring, and little fingers) under the fixed-object condition. The fingers' MVFs along the Z-axis (the direction of grasping force) were measured. The subjects were instructed to keep all digits on the sensors during each task and to pay attention to maximal force production by the task finger. Each subject performed a total of 8 trials: 2 prehension types (fixed and free) × 4 fingers (index, middle, ring, and little) = 8 trials. The second session involved a series of multi-finger torque production tasks under both fixed- and free-object conditions. In this session, there were eight experimental conditions: 2 prehension types × 4 torque conditions about the y-axis (supination efforts: -0.48, -0.24 Nm; pronation efforts: 0.24, 0.48 Nm).

For the fixed-object condition, the handle was mechanically fixed to a vertical aluminum plate (Fig. 1b) so that the handle could not be translated or rotated. The subjects were instructed to produce an assigned torque for 6 s while watching feedback of the torque being produced on a computer screen. For the free-object prehension task, the subjects' task was to hold the handle while maintaining its pre-set constant linear and angular position against the given external torques. The subjects were instructed to minimize the angular and linear deviations of the handle from the initial positions. For each condition, twenty-five consecutive trials were performed. Thus, each subject performed a total of 200 trials (2 prehension types × 4 torques × 25 trials = 200 trials) in the second session. Two-minute breaks were given at the end of each trial in order to avoid fatigue effects. The order of experimental conditions was balanced, and no subject reported fatigue.

Fig. 1 Schematic illustration of experimental setup for (a) free-object prehension and (b) fixed-object prehension. Real-time feedback of translation along the z-axis (horizontal translation), translation along the y-axis (vertical translation), and rotation about the x-axis was provided during free-object prehension

Individual fingers were classified into moment agonists and moment antagonists with respect to the direction of the moment of finger force [3]. Agonist fingers produce a moment of normal force in the required direction of torque, while antagonist fingers produce a moment of normal force in the direction opposite to the task torque. Within the moment agonists (or moment antagonists), fingers were further classified into two types based on the lengths of the moment arms of their grasping forces from the thumb position. The normal forces of fingers with shorter moment arms were designated F1, while those with longer moment arms were designated F2. We then calculated the ratio of F2 to F1 within each group of moment agonists and moment antagonists to quantify the index of mechanical advantage (Eq. 1 & 2). In addition, F2 and F1 were normalized by the corresponding fingers' maximal voluntary forces (MVF) measured during the first session, and the ratio of normalized F2 to F1 was computed for both the moment agonists and antagonists (Eq. 3 & 4).

MA_ago = F_ago2 / F_ago1 (1)

MA_ant = F_ant2 / F_ant1 (2)

MA_ago^norm = (F_ago2 / F_ago2^max) / (F_ago1 / F_ago1^max) (3)

MA_ant^norm = (F_ant2 / F_ant2^max) / (F_ant1 / F_ant1^max) (4)
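A short Python sketch of Eqs. (1)-(4) follows, using made-up finger forces. Which fingers act as agonists or antagonists depends on the torque direction, so the pairings below are illustrative assumptions, not the study's data.

    # Hypothetical normal forces (N) and maximal voluntary forces (N)
    # for the four fingers; values are illustrative, not measured data.
    F = {"index": 4.1, "middle": 3.2, "ring": 2.0, "little": 1.4}
    Fmax = {"index": 28.0, "middle": 24.0, "ring": 18.0, "little": 14.0}

    def ma(f2, f1):
        """Eq. (1)/(2): ratio of the longer-moment-arm finger force (F2)
        to the shorter-moment-arm finger force (F1)."""
        return f2 / f1

    def ma_norm(f2, f2max, f1, f1max):
        """Eq. (3)/(4): the same ratio after normalizing each force by
        that finger's MVF from the first session."""
        return (f2 / f2max) / (f1 / f1max)

    # Assuming index/middle act as moment agonists for this torque direction
    # (index has the longer moment arm) and ring/little as antagonists:
    MA_ago = ma(F["index"], F["middle"])                  # Eq. (1), > 1
    MA_ant = ma(F["little"], F["ring"])                   # Eq. (2), < 1
    MA_ago_norm = ma_norm(F["index"], Fmax["index"],
                          F["middle"], Fmax["middle"])    # Eq. (3)
    print(MA_ago, MA_ant, MA_ago_norm)

With these illustrative numbers MA_ago > 1 and MA_ant < 1, the pattern the MA hypothesis predicts for agonist and antagonist fingers respectively.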

… pronation efforts, while MA_ago values were not different between the fixed- and free-object conditions. These results were supported by the ANOVA, with a significant main effect of DIR [F(1, 16) = 19.93, p …]

A Tissue Equivalent Phantom of the Human Torso for in vivo Biocompatible Communications
D.M. Peterson et al.

… 20 mm in diameter, be non-magnetic, homogeneous, and have a smooth surface that does not leave gaps when interfaced to the probe. Each mixture is prepared in a total volume of about 500 ml, and a small amount is then poured into a cylindrical container that is approximately 40 mm in diameter and 25 mm deep. Four tissue mixtures were prepared through empirical development; those tissues are fat, average abdomen, average muscle, and an average heart, liver, and spleen mixture. The results for those mixtures are shown in Table 1.

Table 1 Phantom mixture values for conductivity and permittivity
Tissue             Epsilon     Sigma (S/m)
Fat                5.459623    0.051398
Muscle Average     52.569      1.01197
Organs Average     57.0573     1.371084
Abdomen Average    43.3249     0.871451
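The Table 1 values can be kept alongside the analysis for quick checks. The snippet below simply contrasts each mixture's measured conductivity with the single 1 S/m saline value used in the crude bulk phantom discussed later, which makes the mismatch for fat obvious; the dictionary layout is just one convenient representation.

    # Measured phantom mixture properties from Table 1: (epsilon_r, sigma in S/m)
    mixtures = {
        "Fat":             (5.459623, 0.051398),
        "Muscle Average":  (52.569,   1.01197),
        "Organs Average":  (57.0573,  1.371084),
        "Abdomen Average": (43.3249,  0.871451),
    }
    SALINE_SIGMA = 1.0  # S/m, the bulk value used in the simple saline phantom

    for tissue, (eps_r, sigma) in mixtures.items():
        # Fat is off by a factor of ~19, so one saline value cannot stand
        # in for every tissue layer.
        print(f"{tissue:15s} eps_r={eps_r:8.3f} sigma={sigma:8.5f} "
              f"saline/sigma={SALINE_SIGMA / sigma:5.1f}")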

B. Phantom Construction
The next step was the actual phantom construction; an acrylic cylinder was heated and shaped to form an ellipse similar to the torso of the human body, with an inner diameter of 14.5" along the x-axis and 11" along the y-axis and a wall thickness of ¼". The first layer added to the phantom was a fat layer composed of shortening, a semisolid fat typically used in food preparation, placed against the outer edge of the phantom and held in place by plastic wrap. The next layer added was the lower abdomen, along with a separate stomach compartment made of PVC pipe that can be filled or left as an airspace depending upon the experiment. The next layer to be added was the heart, liver, and spleen mixture, which covered the stomach and lower abdomen and was composed of plastic bags doped with diethylene glycol (DEG), distilled water, NaCl, and TX-151. Finally, the phantom was filled to the top with the final layer, simulating the muscles of the upper chest, back, and shoulders. The completed phantom has the PVC esophagus protruding for use in studies involving the stomach, or alternatively as an access port for any type of study that would involve putting an RF circuit in the central portion of the body. The completed phantom requires verification, which is demonstrated and discussed in the next section; the phantom is verified through reflection measurements and is compared against saline and the human body.

C. Phantom Results via Reflection Measurements
Upon completion of the phantom shown in Fig. 1, a test was performed to demonstrate the electrical loading of the phantom torso versus a human torso. The test devised to exhibit equivalence between the phantom and the human body was a reflection measurement. An MRI coil was used that had been tuned and matched at 915 MHz to a 90 kg human body, similar to the human body used in the REMCOM model. The measurement was made using an HP 4396B vector network analyzer on a human subject with approximately ¼" spacing; a ¼" pad was placed between the subject and the coil to account for the ¼" wall thickness of the phantom. The identical reflection measurement was then made with the MRI loop placed directly against the tissue equivalent phantom (TEQ). The figures below show very good agreement between the human subject and the TEQ phantom. The performance of the TEQ phantom can be verified via bench measurements and simulation utilizing electromagnetic field simulators.

Fig. 1 Partially filled phantom



Figure 2 compares real human tissue, measured with a vector network analyzer (shown in blue in Figure 2), against a crude bulk loading phantom filled with a 1 S/m saline solution (the average conductivity of most muscle and organ tissue; shown in red in Figure 2). The deep "dip" in the reflection (S11) demonstrates good tuning and matching on the human load; when the same antenna is applied to the bulk loading phantom (saline box), the S11 parameters show poor agreement with the actual human load. This can be rectified through better phantom design, as demonstrated by the tissue equivalent phantom used to make the reflection measurements shown in green in Figure 2. This process was then repeated using a folded dipole designed for 915 MHz and tuned and matched to the human body; these results are shown in Figure 3.
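The S11 comparison can be reasoned about with the usual one-port reflection formula. The sketch below computes return loss from a load impedance with Z0 = 50 Ω; the example impedances are invented stand-ins for a matched human-like load versus a mismatched saline box, not measured values.

    import numpy as np

    def s11_db(z_load, z0=50.0):
        """Return loss in dB for a one-port: Gamma = (ZL - Z0) / (ZL + Z0).
        A deep negative dip means the antenna is well tuned and matched to
        the load, as seen for the human subject and the TEQ phantom."""
        gamma = (z_load - z0) / (z_load + z0)
        return 20.0 * np.log10(np.abs(gamma))

    print(s11_db(48.0 - 2.0j))   # near-matched, human-like load: ~ -31 dB dip
    print(s11_db(21.0 + 30.0j))  # mismatched saline-box load: ~ -5 dB, shallow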

D. Phantom and Simulation Comparisons
The program chosen for the simulations was REMCOM, which has a human body model and is widely used in industry and academia [11-15]. The first step is to open the program and start a geometry file. In the geometry file it is important to select a cell size; however, the finest resolution results in slow-running simulations, so it is important to balance the resolution of the mesh to get the required results without the simulation taking too much time. Once the geometry is drawn, it is important to "pad" the boundaries, in this case with air. This allows the program to converge, which brings us to further parameter options: setting the convergence threshold in REMCOM (-30 dB for simulations running in air, -16 dB for simulations running with the human body model) and setting the number of time steps, which was set to 10,000 to help the program run faster. Initial studies were done using a λ/2 dipole at approximately 915 MHz, the center of the ISM band spanning 902-928 MHz. Once the solid geometry is drawn or imported from a computer-aided design (CAD) program, it must be converted to a wire mesh and sources added (passive: capacitors or inductors; active: signal generator). For the standard dipole, an active series voltage source was used; no matching was required, and the dipole was connected to the 50 Ohm source. After the setup of the regular dipole, a folded dipole configuration was used with the human body model.

E. Correlation with Heat Test
Upon confirming that the tissue equivalent phantom was electrically equivalent to the human body, an SAR experiment was performed to compare simulation against experiment. The SAR experiment assumed that the majority of the transmitted power was far-field; utilizing the calorimetric method from IEC 600-601-1, 2 W of continuous power was applied to the phantom until a 2 °C temperature rise was obtained. The SAR was calculated and then compared to the simulations for the folded dipole and the MRI coil cases, showing agreement. The results of the SAR simulations and heating experiments are shown in Table 2 and Table 3.

Table 2 MRI SAR calculations and measurements
                      REMCOM Body Model   Tissue Equivalent Phantom   Homogeneous Saline Phantom
45 mm MRI coil SAR    6 W/kg              7 W/kg                      2.5 W/kg

Table 3 Folded dipole SAR calculations and measurements
                            REMCOM Body Model   Tissue Equivalent Phantom   Homogeneous Saline Phantom
Folded Dipole Antenna SAR   9 W/kg              10.4 W/kg                   3.9 W/kg
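The calorimetric SAR estimate behind these numbers reduces to SAR = c·ΔT/Δt when heat loss during the exposure is neglected. The sketch below applies that relation; the specific heat and the exposure time are assumed values for illustration, not parameters reported here.

    def sar_calorimetric(delta_t, exposure_s, c=3500.0):
        """SAR in W/kg from a measured temperature rise, SAR = c * dT / dt.
        c is the specific heat of the tissue-equivalent material in
        J/(kg*K); 3500 is a typical soft-tissue value, assumed here."""
        return c * delta_t / exposure_s

    # The 2 degC rise from the 2 W exposure; a ~10 min exposure is assumed.
    print(sar_calorimetric(2.0, 600.0))  # -> ~11.7 W/kg, the order of Table 3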

III. CONCLUSIONS
This work demonstrated the need for tissue equivalent phantoms that simulate the electrical properties of a biological system. The work began with a general discussion of phantoms and their relevance to biomedical applications, leading into an in-depth study of the current standards applicable to specific absorption rate (SAR), which was later applied to the tissue equivalent phantom. Equipment and procedures for measuring complex permittivity were discussed. Three distinct types of phantoms for simulating the electrical properties of humans were discussed: a saline phantom, a tissue equivalent phantom, and a segmented tissue equivalent phantom. It was shown that the saline phantom breaks down as a good electrical model of the human body at very high frequencies (VHF), and at ultra-high frequencies the parameters are worse, with results shown in Figures 4-8. Antenna simulations were performed for two different types of antennas: the first was of a folded dipole at 915 MHz, the second of a 45 mm MRI surface coil at 915 MHz. These simulations were used to compare simulation against tissue equivalent and saline phantoms, with the method of comparison being a study of induced SAR. A tissue equivalent phantom was constructed consisting of four combined tissues to accurately act as a human load.

Those four tissues were fat, average muscle, a heart-liver-spleen average, and an average abdomen (small and large intestines). These four tissue types were then concentrically placed in the former to electrically simulate human tissue. The next step was to compare the electrical properties of the phantom against a human load. A reflection measurement of the tissue equivalent phantom performed almost identically to the human load, whereas the saline load differed significantly from the human load. This work has shown the usefulness of the tissue equivalent phantom for measurements, testing, and empirical analysis of RF interactions with the human body. The measurements of the TEQ heat rise and subsequent SAR calculations were within 15% for the folded dipole and within 19% for the MRI coil. Future generations of phantoms should include a lung space; this should help with some of the complex air-tissue interfaces present in the human body.

Fig. 2 915 MHz MRI coil reflection results

Fig. 3 915 MHz Folded dipole reflection results

REFERENCES

1. Beck, B.L., et al., Tissue-Equivalent Phantoms for High Frequencies. Concepts in Magnetic Resonance Part B: Magnetic Resonance Engineering, 2003. 20B(1): p. 30-33.
2. Durney, C.H. and D.A. Christensen, Basic Introduction to Bioelectromagnetics. 1st ed. 2000, New York: CRC Press LLC. 169.
3. Hartsgrove, G., A. Kraszewski, and A. Surowiec, Simulated Biological Materials for Electromagnetic Radiation Absorption Studies. Bioelectromagnetics, 1986. 8: p. 29-36.
4. Foster, K.R. and H.P. Schwan, Dielectric Properties of Tissue - A Review, Handbook of Biological Effects of Electromagnetic Radiation. 1986, Cleveland: CRC Press.
5. Gabriel, S., R.W. Lau, and C. Gabriel, The Dielectric Properties of Biological Tissues: I. Literature Survey. Phys Med Biol, 1996. 41: p. 2231-2249.
6. Gabriel, S., R.W. Lau, and C. Gabriel, The Dielectric Properties of Biological Tissues: II. Measurements in the Frequency Range of 10 Hz to 20 GHz. Phys Med Biol, 1996. 41: p. 2251-2269.
7. Gabriel, S., R.W. Lau, and C. Gabriel, The Dielectric Properties of Biological Tissues: III. Parametric Models for the Dielectric Spectrum of Tissues. Phys Med Biol, 1996. 41: p. 2271-2293.
8. Stuchly, M.A., et al., Dielectric properties of animal tissues in vivo at frequencies 10 MHz - 1 GHz. Bioelectromagnetics, 1981. 2(2): p. 93-103.
9. Stuchly, M.A. and S.S. Stuchly, Dielectric Properties of Biological Substances. Journal of Microwave Power, 1980. 15: p. 19-26.
10. Angelone, L.M., et al., On the effect of resistive EEG electrodes and leads during 7 T MRI: simulation and temperature measurement studies. Magnetic Resonance Imaging, 2006. 24(6): p. 801-812.
11. Ayatollahi, M., et al. Effects of supporting structure on wireless SAR measurement. in Antennas and Propagation Society International Symposium, 2008. AP-S 2008. IEEE. 2008.
12. Gallo, M., P.S. Hall, and M. Bozzetti. Use of Animation Software in the Simulation of On-Body Communication Channels. in Antennas and Propagation Conference, 2007. LAPC 2007. Loughborough. 2007.
13. Jayawardene, M., et al. Comparative study of numerical simulation packages for analysing miniature dielectric-loaded bifilar antennas for mobile communication. in Antennas and Propagation, 2001. Eleventh International Conference on (IEE Conf. Publ. No. 480). 2001.
14. Tarvas, S. and A. Isohatala. An internal dual-band mobile phone antenna. in Antennas and Propagation Society International Symposium, 2000. IEEE. 2000.
15. Yu, H., et al., Printed capsule antenna for medication compliance monitoring. Electronics Letters, 2007. 43(22).

Author: David M. Peterson
Institute: University of Florida
Street: HSC 100015
City: Gainesville, FL 32610
Country: USA
Email: [emailprotected]

Identification of Bacteria and Sterilization of Crustacean Exoskeleton Used as a Biomaterial
Tiffany Omokanwaye1, Donae Owens2, and Otto Wilson Jr.1

1 Catholic University of America/Biomedical Engineering Department, Washington, D.C., USA
2 Benjamin Banneker Academic High School, Washington, D.C., USA

Abstract— Derivatives of the crustacean exoskeleton, like chitin, have a long history of use as biomaterials. In the BONE/CRAB lab, the blue claw crab exoskeleton is our biomaterial of choice for a possible bone implant material. The blue claw crustacean, Callinectes sapidus, is found in the Chesapeake Bay. Chitinolytic bacteria, such as those belonging to the Vibrio and Bacillus genera, are common to marine crustaceans. Previous in vitro studies in our lab indicated that bacterial contamination is a major concern. One of the fundamental considerations in the use of an implant biomaterial is sterilization. Materials implanted into the human body must be sterile to avoid subsequent infection or other more serious consequences. An effective sterilization method strikes a balance between the required sterility level and minimal detrimental effect on the properties of the biomaterial while being cost-effective, simple, and readily available. The objective of this study was to isolate and identify bacterial contaminants and to develop the best sterilization method for the bacteria found on blue claw crab exoskeleton. Bacteria belonging to the genus Bacillus were identified based on bacterial growth morphologies of dry, dull, raised, rough, and white-grey appearance on LB agar. Bacillus members form endospores, which are difficult to eliminate and pose a significant concern for implantable materials. There was no bacterial growth on the TCBS agar plates, a differential and selective medium for Vibrio species. Antimicrobial susceptibility tests were conducted to measure the effectiveness of 70% isopropyl alcohol, povidone-iodine, and household bleach against the bacteria found. The susceptibility tests revealed sensitivities towards the compounds studied. Bacterial identification and susceptibility provide vital guidance toward the best method to sterilize while maintaining biological performance. Further studies will evaluate the effect the sterilization protocol has on the physical, chemical, and biological properties of the implant material. Keywords— Crustacean, Microbiology, Sterilization, Biomaterial.

I. INTRODUCTION
Materials designed by humans pale in comparison to those created by nature. Natural materials use free energy and operate under conditions of low temperature (0-40 °C), atmospheric pressure, and neutral pH [1]. Bone, one of nature's masterpieces, is a remarkable, living, mineralized connective tissue characterized by its hardness, its resilience, and its ability to remodel and repair itself [2].

No single existing material possesses all the properties required of an ideal bone implant. A suitable bone graft material of proper quality, readily available in unlimited quantities, is still needed [3]. Nacre, the source of our inspiration, has been adopted as a bone implant material due to its ability to integrate with bone; this was noted as early as 600 A.D. in the ancient Mayan civilization [4]. One of the goals of this work is to evaluate crab exoskeleton as a potential material to promote bone remodeling. Crab exoskeleton is a natural material, similar to bone in composition, structure, and function; consequently, there exists a body of work that features crab exoskeleton and bone [5, 6, 7]. Blue claw crabs are abundant in east coast bays and waterways [8]. Blue claw crabs are crustaceans whose carapace comprises a mineralized hard component, primarily calcium carbonate, and a softer organic component, primarily α-chitin [7]. Bone also has a mineralized hard component, primarily calcium phosphate, and a softer organic component, primarily collagen I. Similar features of bone and crab exoskeleton are listed in Table 1. These similarities support our theory that crab exoskeleton can be used as a bone implant material. Previous in vitro studies in our lab indicated that bacterial contamination is a major concern with our crab exoskeleton samples. One of the fundamental considerations in the use of an implant biomaterial is sterilization. Materials implanted into the human body must be sterile to avoid subsequent infection that can lead to significant illness or possibly death. Several sterilization methods have been used for implant biomaterials. An effective sterilization method strikes a balance between the required sterility level and minimal detrimental effect on the properties of the biomaterial while being cost-effective, simple, and readily available [9]. Chitin is an abundant polymer within the marine environment; thus chitinolytic bacteria are both common and vital to nutrient recycling. Bacteria belonging to the genera Vibrio, Aeromonas, Pseudomonas, Spirillum, Bacillus, Alteromonas, Flavobacterium, Moraxella, Pasteurella, and Photobacterium are all reported as probable agents involved in the bacterial contamination prevalent in marine crustaceans like the blue claw crab [10]; however, as a starting point, our attention was focused on the genera Vibrio and Bacillus. The objective of this study was to develop the best sterilization method for bacterial contaminants identified on the blue claw crab exoskeleton. The sterilization agents, 70% isopropyl alcohol, povidone-iodine, and household bleach, were selected on the basis of their availability, simplicity, and cost.

Table 1 Similarities between Bone and Crab Exoskeleton
1. hierarchical structuring at all size levels
2. organic phase (collagen and chitin)
3. liquid crystalline behavior of the organic matrix
4. a highly loaded inorganic phase (hydroxyapatite or calcium phosphate, and calcium carbonate) that contains 40% Ca2+ by mass
5. textured, crystallographic orientation of the inorganic phase
6. protein and mucopolysaccharide constituents
7. biomineralized composites
8. relatively hard and damage tolerant
9. structural support role, cellular control
10. self-healing behavior
11. piezoelectric properties (the ability to convert mechanical stress to electrical signals)
12. the ability to adapt to environmental changes

II. MATERIALS AND METHODS
Materials- Blue claw crabs were purchased from local crabbers. Fluka Thiosulfate Citrate Bile Salts Sucrose agar (TCBS agar) powder, Luria Bertani (lysogeny broth, LB agar) pre-poured agar plates, Fluka sterile paper discs of 10 mm diameter, and Fluka Oxidase test strips were purchased from Sigma Aldrich. A Fisher Scientific Accumet 1000 Series Handheld pH/mV/Ion Meter was used for pH measurements. Povidone-iodine, 70% isopropyl alcohol, and household bleach were purchased from a local grocery store.
Methods- Blue claw crab exoskeletons were removed, cleaned with deionized water, and allowed to dry overnight. The dried exoskeletons were ground in a coffee grinder and stored in plastic specimen cups for later use. Approximately 0.1 g of crab exoskeleton chips was mixed with 50 ml of deionized water (crab broth) and allowed to stand for a day.
Bacterial Isolation- One milliliter of the previously prepared crab broth was spread over individual agar plates. Plates were incubated at 35 °C for 2-3 days to allow bacterial growth. TCBS and LB agars were used: (1) TCBS agar is the primary plating medium universally used for the selective isolation of Vibrios; and (2) LB agar is a general nutrient medium used for routine cultivation and growth of bacteria that does not preferentially grow one kind of bacteria over another.

Biochemical Oxidase Test- A member of the genus Vibrio was predicted to be the probable bacterial contaminant of the crab exoskeleton chips. Biochemical tests such as the oxidase test can be used to identify and differentiate types of bacteria possessing the enzyme cytochrome oxidase [11]. Vibrio strains are oxidase positive. It must be noted that TCBS is an unsatisfactory medium for oxidase testing of Vibrio species [12]. Bacillus species, on the other hand, are oxidase negative [11]. Plastic diagnostic strips with a paper zone were used to wipe off several suspect colonies from the plates. Results were read after 1 minute. A negative result corresponds to no color change at the position of the wiped colony; a positive result corresponds to a dark blue or black spot developing at the position of the wiped colony.
Antimicrobial Susceptibility Testing- Antimicrobial susceptibility testing is a standard testing method used to measure the effectiveness of agents against pathogenic microorganisms.

Fig. 1 Measurement of the zone of inhibition: measure edge to edge across the zone of inhibition over the center of the disk; no zone around disc – N/A




Table 2 Results for isolation of bacteria and sterilization of blue claw crab exoskeleton chips

Test: LB Agar Growth Patterns
Purpose: General nutrient media to determine colony morphology
Result: Shape: rounded; Edge: irregular; Elevation: raised; Color: white-grey, dull; Texture: rough, dry
Analysis: Genera: Bacillus* (*spore forming)

Test: TCBS Growth Pattern
Purpose: Differential, selective media for Vibrio colony morphology
Result: No growth
Analysis: Does not have Vibrio bacteria

Test: Biochemical Oxidase Test
Purpose: Identifies organisms that produce the enzyme cytochrome oxidase
Result: No color change on strip
Analysis: Negative for production of the enzyme cytochrome oxidase

Test: Antimicrobial Susceptibility Zone of Inhibition (ZI)
Purpose: Measures effectiveness of agents against microorganisms
Result: ZI_Alcohol (pH=6) = 11.7 ± 0.6 mm; ZI_Bleach (pH=12) = 14.2 ± 0.8 mm; ZI_Iodine (pH=4) = 10.8 ± 0.3 mm; ZI_Control = N/A
Analysis: Bleach is the most effective

Sterile 10 mm paper disks impregnated with povidone-iodine, 70% isopropyl alcohol, and household bleach were placed on a plate inoculated to form a bacterial lawn. Each disc absorbs exactly 50 µL of liquid. A control disc with no chemical agent was also included on the plate. The plates were incubated to allow growth of the bacteria and time for the agents to diffuse into the agar. As a substance moves through the agar, it establishes a concentration gradient. If the organism is susceptible to the agent, a clear zone appears around the disk where growth has been inhibited. The size of the zone of inhibition (ZI) depends upon the sensitivity of the bacteria to the specific antimicrobial agent [11]. The sterile disks impregnated with the three different chemical agents described earlier, and a control with no chemical agent, were placed approximately the same distance from the edge of the plate and from each other while ensuring the disks were in complete contact with the surface of the agar. The zones of inhibition (ZI) were measured as shown in Fig. 1. The ZI tests were conducted three times, and the averages and standard deviations were calculated. The traditional methodology for routine detection of pathogens nearly always employs a combination of different media in order to increase the sensitivity and specificity of the detection and identification method. Quantitative growth, the ability of the medium to produce distinctive biochemical reactions [13], and the zones of inhibition were evaluated. Table 2 lists the results for the isolation of bacteria and sterilization of blue claw crab exoskeleton chips.
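The reported ZI means and standard deviations come from triplicate readings. The following Python snippet shows the arithmetic with illustrative triplets chosen to reproduce the same means and spreads; they are stand-ins, not the actual raw measurements.

    import statistics

    # Stand-in triplicate zone-of-inhibition readings (mm) per agent.
    zi_readings = {
        "bleach (pH 12)": [13.5, 14.1, 15.0],
        "alcohol (pH 6)": [11.2, 11.5, 12.4],
        "iodine (pH 4)":  [10.5, 10.9, 11.0],
    }

    for agent, readings in zi_readings.items():
        mean = statistics.mean(readings)
        sd = statistics.stdev(readings)  # sample (n-1) standard deviation
        print(f"{agent:15s} ZI = {mean:.1f} ± {sd:.1f} mm")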

III. RESULTS AND DISCUSSION
Typical Vibrio species morphology is small or large yellow colonies, due to sucrose fermentation, on TCBS agar plates [11]. No bacterial growth was observed on the TCBS agar plates. However, organisms were isolated on the LB agar plates. Based on growth morphologies displaying a dry, dull, raised, rough, and white-grey appearance on LB agar, the bacteria were identified as belonging to the genus Bacillus. Bacillus is an endospore-forming bacterium commonly found in soil and aquatic habitats. Among 70 tested Bacillus spp. strains, 19 were found to possess chitinolytic activity [14]. To identify the species within the genus, more differential testing will be required. An endospore is a dormant form of the bacterium that allows it to survive poor environmental conditions. Spores are resistant to heat and most chemicals because of a tough outer covering made of the protein keratin [11]. Because bacterial spores are relatively difficult to kill, it is usually assumed that a process which kills all spores present also kills all other microbial forms present - that is, it sterilizes the material. Most liquid chemical disinfectants, however, have little or no sporicidal action. Concentrated hypochlorite solutions (bleach) are sporicidal at room temperature but unfortunately are very corrosive. Mixtures of alcohol and bleach have been shown to be highly sporicidal. Alcohol alone has no sporicidal activity; alcohol may 'soften' the spore coat, facilitating penetration by the hypochlorite reaction product [15]. Oxidase tests performed on bacterial colonies formed on the LB agar plates were negative for production of the enzyme cytochrome oxidase, as there was no color change at the position of the wiped colony. The zones of inhibition (ZI) were measured as follows: ZI_Control = N/A, ZI_Alcohol (pH=6) = 11.7 ± 0.6 mm, ZI_Bleach (pH=12) = 14.2 ± 0.8 mm, and ZI_Iodine (pH=4) = 10.8 ± 0.3 mm. The susceptibility tests revealed sensitivities towards all the agents studied, with the bacteria most susceptible to household bleach. Further tests are required to determine the minimum inhibitory concentration for each chemical agent, as well as any changes to the existing properties of the crab chips.



The pH values for povidone-iodine, 70% isopropyl alcohol, and household bleach were 4, 6, and 12, respectively. There appears to be direct relationship between pH and antimicrobial susceptibility; as the pH increases and becomes more basic, the ZI also increases.

IV. CONCLUSION One of the main research questions in the BONE/CRAB Lab involves the evaluation of crab exoskeleton as a material for bone inspired implants. Making implants safe and/or sterile for use in the body is a daunting task. Based on bacterial growth morphologies of dry, dull, raised, rough, and whitegrey appearance on LB agar, bacteria belonging to the genera Bacillus were identified. Bacillus members form endospores which are difficult to eliminate. Endospores pose a significant concern for implantable materials since the human body can create harsh environmental conditions. There was no bacterial growth on the TCBS agar plates which is a differential and selective media for Vibrio species. Antimicrobial susceptibility tests revealed sensitivities of 70% isopropyl alcohol, povidoneiodine, and household bleach against the bacteria found on crab exoskeleton. Bleach had the greatest sensitivity with a ZI measurement of 14.2 ± 0.8 mm. Sterilization is associated with the total absence of viable microorganisms, which refers to an absolute condition and assures the greatest safety margin than any other antimicrobial method. Finding an effective agent against spores requires a thorough understanding of the unique characteristics of each chemical agent, including their limitations and appropriate applications [16]. Apparent lack of an ideal liquid chemical sterilant and results from the zones of inhibition study establishes our need to test different concentrations of bleach and mixtures with alcohols to reach an optimum level of sterility without sacrificing properties such as bioactivity.

421

REFERENCES
1. Smith C A, Wood E J (1991) Molecular and Cell Biochemistry: Biological Molecules. Chapman & Hall, London
2. Hing K A (2004) Bone repair in the twenty-first century: biology, chemistry or engineering? Philos Trans R Soc Lond A, 2821-2850
3. Wise D L et al. (2002) Biomaterials Engineering and Devices: Human Applications. Humana Press, Totowa
4. Ratner B D (2001) Replacing and Renewing: Synthetic Materials, Biomimetics, and Tissue Engineering in Implant Dentistry. J Dent Educ 65:1340-1347
5. Bouligand Y (1972) Twisted Fibrous Arrangements in Biological Materials and Cholesteric Mesophases. Tissue Cell 4:189-217
6. Giraud-Guille M-M, Belamie E, Mosser G (2004) Organic and mineral networks in carapaces, bones, and biomimetic materials. Comptes Rendus Palevol 3:503-513
7. Meyers M A et al. (2006) Structural Biological Composites: An Overview. JOM 58:35-41
8. Perry H (2001) Unit Five Coast/Blue Crabs. Project Oceanography. [Online]. http://www.marine.usf.edu/pjocean/packets/f01/f01u5p2.pdf
9. Morejon-Alonso L et al. (2007) Effect of Sterilization on the Properties of CDHA-OCP-B-TCP Biomaterial. Material Research 10:15-20
10. Vogan C L, Costa-Ramos C, Rowley A F (2002) Shell Disease Syndrome in the Edible Crab, Cancer Pagurus -- Isolation, Characterization and Pathogenicity of Chitinolytic Bacteria. Microbiology 148:743-754
11. Leboffe M J, Pierce B E (2005) A Photographic Atlas for the Microbiology Laboratory, 3rd edition. Morton Publishing Company, Englewood
12. Morris G K et al. (1979) Comparison of Four Plating Media for Isolating Vibrio. J Clin Microbiol 9:79-83
13. Blom M et al. (1999) Evaluation of Statens Serum Institut Enteric Medium for detection of Enteric Pathogens. J Clin Microbiol 37:2312-2316
14. Aktuganov G E et al. (2003) The Chitinolytic Activity of Bacillus Cohn Bacteria Antagonistic to Phytopathogenic Fungi. Microbiology 72:356-360
15. Coates D, Death J E (1978) Sporicidal activity of mixtures of alcohol and hypochlorite. J Clin Pathol 31:148-152
16. Mazzola P G, Penna T C V, da S Martins A M (2003) Determination of decimal reduction time (D value) of chemical agents used in hospitals for disinfection purposes. BMC Infect Dis 3

ACKNOWLEDGMENT
The authors would like to acknowledge support from the NSF Biomaterials Program (grant number DMR-0645675).

The corresponding author:
Author: Otto Wilson, Jr., PhD
Institute: Catholic University of America
Street: 620 Michigan Ave., NE
City: Washington, DC
Country: USA
Email: [emailprotected]

Neural Stem Cell Differentiation in 2D and 3D Microenvironments
A.S. Ribeiro1, E.M. Powell2, and J.B. Leach1

1 University of Maryland Baltimore County/Chemical & Biochemical Engineering, Baltimore, USA
2 University of Maryland School of Medicine/Anatomy and Neurobiology, Baltimore, USA

Abstract— Neural Stem Cells (NSCs) have tremendous potential for tissue engineering applications because of their high regenerative capacity to promote functional recovery following disease and injury in the central nervous system. Despite their great potential, current methods to culture NSCs are limited; e.g., adherent 2D cultures are greatly simplified vs. the in vivo microenvironment, imposing altered tissue-specific architecture, mechanical and biochemical cues, and cell morphology. Environmental cues are critical for cellular maturation and function, and in vivo these are presented in a 3D environment. Recent studies with non-neuronal cells demonstrate that in a 3D matrix, cells dramatically alter their morphology and signaling pathways, with in vitro 3D environments being a better representation of in vivo systems. The main goal of this study is to define how NSC differentiation and cell-matrix signaling are altered in 2D and 3D systems. We hypothesize that 3D culture imposes changes in matrix-ligand organization and alters NSC behavior by modulating cytoskeletal signaling and differentiation outcome. To test our hypothesis, we cultured mouse embryonic NSCs in 2D and 3D biomaterials and observed differences in cell behavior and β1-integrin signaling with altered culture dimensionality using immunocytochemistry and flow cytometry. In this study we show that NSCs sense the dimensionality of their environment and alter motility: in 3D, individual cells adopt a random migration pattern and extend longer neurites than in 2D, where the cells undergo chain migration. In addition, the differentiation of the NSCs into the neuronal phenotype is increased in 2D vs. 3D culture. These results confirm our hypothesis and provide a foundation to design optimal biomaterials towards the development of therapeutics for nerve repair and neurodegenerative disorders. Keywords— Neural Stem Cells, 3D culture, differentiation, β1-integrin signaling.

I. INTRODUCTION
The regeneration of damaged nervous tissue is a complex biological problem. Peripheral nerve injuries can heal on their own if the injury is small, but factors exist within the central nervous system (CNS) that pose barriers to regeneration [1]. Functional recovery following brain and spinal cord injuries and neurodegenerative diseases is likely to require the transplantation of exogenous neural cells and tissues, and neural stem cell (NSC) transplants have shown great potential to promote functional recovery [2-4]. Though promising, the success of neurotransplantation is currently limited by the short-term survival of NSCs and their failure to integrate with the host tissue [5,6]. To overcome these challenges, tissue engineers have successfully combined neural stem cells and polymer scaffolds to generate functional neural and glial constructs that emulate the mammalian brain or spinal cord structure and can therefore be used as tissue replacements for CNS injuries [2,3,7]. Although some success has been noted in the use of biomaterial implants [8-10], most investigations of biomaterials for NSC applications have been implemented in vitro, and the few transplant studies carried out did not show improvement beyond the level of success reported for NSC transplants alone. We believe that tissue engineering efforts focused on nerve repair and brain injury have been limited by a poor understanding of how NSCs interact with three-dimensional (3D) cues. Cells cultured in engineered 3D microenvironments have been shown to better represent in vivo cellular behavior than cells cultured in 2D configurations [11,12]. For example, cells cultured in 3D scaffolds have been found to exhibit more in vivo-like viability, proliferation, response to biochemical stimuli, gene expression, and differentiation [13,14]. One of the fundamental differences between 2D and 3D culture is the distribution of cell-cell and cell-extracellular matrix interactions, which alter cell morphology, signaling mechanisms, and subsequent cell function [11,15,16]. The types of cell-matrix adhesions organized by integrins in vitro, and the signals they transduce, have been shown to be strongly affected by the flat, rigid surfaces of tissue culture dishes [11]. Therefore, a closer approximation to in vivo environments should be attained by growing cells in 3D matrices [16]. Given these findings, there have been a number of studies investigating the interactions between NSCs and 3D biomaterials. In general, work in this area has focused on the effect of the biomaterial microenvironment on NSC viability [21,22] and differentiation [21,23] without exploring how substrate dimensionality directly impacts matrix-cytoskeletal interactions and how it imposes indirect effects on NSC fate. The purpose of this study is to define the molecular mechanisms of how neural stem cells interact with their 3D environment by determining the effect of environment dimensionality on NSC differentiation and cytoskeletal signaling. We cultured NSCs in 2D and 3D collagen matrices and examined differentiation and β1-integrin expression in both presentations of the same substrate. We found that NSCs adopt different differentiation and migration patterns in 2D vs. 3D culture and conserve β1-integrin signaling expression.

K.E. Herold, W.E. Bentley, and J. Vossoughi (Eds.): SBEC 2010, IFMBE Proceedings 32, pp. 422–425, 2010. www.springerlink.com

Neural Stem Cell Differentiation in 2D and 3D Microenvironments

423

signaling. We cultured NSCs in 2D and 3D collagen matrices and examined differentiation and β1-integrin expression in both presentations of the same substrate. We found that NSCs adapt various differentiation and migration patterns in 2D vs 3D culture and conserve β1–integrin signaling expression.


II. MATERIALS AND METHODS

A. Isolation and Culture of NSCs in 2D and 3D Collagen Matrices

NSCs were derived from the cerebral dorsal telencephalon of E13.5-14.5 C57Bl/6 mice (Jackson Laboratory) [19,24]. The dissected tissue was mechanically minced to a single cell suspension, suspended in proliferation medium (serum-free DMEM/F12 media supplemented with B27 (Gibco), human recombinant FGF-2 (20 ng/ml, Peprotech) and human recombinant EGF (20 ng/ml, Peprotech)) and then seeded at 2x10^5 cells/ml in culture flasks. These conditions promoted the formation of neurospheres from floating cultures of single cells [25]. NSCs were split weekly and the medium refreshed every 2-3 d. Viable NSCs (passage 2-6) were seeded in differentiation medium (proliferation medium without growth factors) onto 2D collagen-coated coverslips (~7 µg/cm2) at 20-25 neurospheres/cm2 and in 3D 1 mg/ml collagen gels at 200-300 neurospheres/ml (10-15 neurospheres per 50 µl gel). These densities were optimized to reduce contact between neighboring neurospheres. NSCs were cultured for 3 d. The medium was changed after the first day of culture. Collagen-coated coverslips and 1 mg/ml gels were prepared using rat tail type I collagen (BD Biosciences) according to the manufacturer's directions. The cells were suspended in collagen solution prior to gelling, mixed, and then 20 µl of collagen solution was transferred to an uncoated glass coverslip, allowed to gel at physiological conditions for ~30 min and then covered with 450 µL of medium.

B. Immunocytochemistry and Confocal Imaging

After fixation in a buffered 4% formalin solution for 20 min, the samples were blocked in 10% lamb serum in PBS for 30 min (in 2D) or 2 h (in 3D). For 2D culture, the cells were incubated in primary antibodies against β1-integrin or phenotypic markers for neurons and astrocytes (Table 1) for 30-60 min and visualized following 30 min incubation with the appropriate fluorescently-conjugated secondary antibodies (Jackson Immunoresearch). Immunocytochemistry procedures for the 3D gels incorporated several long washing steps (30 min each) and overnight antibody incubations in blocking solution on a rotating tray. All procedures were carried out at 25 °C. Immunoreactive cells in 2D and 3D samples were imaged with confocal microscopy (Leica TCS SP5). The 3D gels were imaged using 63x long working distance objectives (WD = 250 µm; Leica). Images of ≥3 samples of ≥3 different cultures were analyzed for each experimental condition (n≥9). We determined differences in the presence (reactive vs absent), location (whole cell, cell body, processes) and type of signal (diffuse vs punctate).

Table 1 Antibodies used for integrin and stem cell differentiation analysis

Name                                   Company                    Dilution
Rabbit anti-β1 integrin                Santa Cruz Biotechnology   [1:100]
Mouse IgG2a anti-βIII-tubulin (TUJ1)   Sigma                      [1:500]
Mouse IgG1 anti-GFAP                   Sigma                      [1:200]
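As a quick, purely illustrative consistency check (not part of the original protocol), the 3D seeding density in II.A scales to the stated per-gel count; the assumed variable names below are ours:

```python
# Illustrative arithmetic check of the 3D seeding density in section II.A;
# not the authors' code, just a verification of the stated numbers.
gel_volume_ml = 50e-3  # one 50 ul collagen gel, expressed in ml
for density_per_ml in (200, 300):
    per_gel = density_per_ml * gel_volume_ml
    print(f"{density_per_ml} neurospheres/ml -> {per_gel:.0f} per 50 ul gel")
# Prints 10 and 15, matching the reported 10-15 neurospheres per gel.
```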

C. Flow Cytometry Analysis

Protein expression was quantified using flow cytometry. Cells in collagen cultures were collected in 2 mg/ml collagenase (Fisher) following trituration to generate single-cell suspensions. Cells were fixed, permeabilized and then incubated with primary antibody in PBS containing 10% fetal bovine serum for a minimum of 30 min at 25 °C with constant agitation. Cells were washed twice with buffer, incubated with the appropriate secondary antibody for a minimum of 30 min at 25 °C and then washed and re-suspended in PBS prior to analysis. Three populations, including the positive, negative (secondary only) and unlabeled cells for each antibody, were analyzed. Three-color live-gating acquisition was carried out on a Beckman-Coulter Cyan ADP flow cytometer (Cytomation).
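The paper does not spell out the gating step, so the sketch below shows one conventional way such positive/negative populations are used: the secondary-only control sets an intensity cutoff, and the fraction of stained cells above it is reported as marker-positive. The function name, the 99th-percentile gate, and the synthetic data are assumptions for illustration, not the authors' pipeline:

```python
# Hypothetical gating sketch: estimate the fraction of marker-positive cells
# by thresholding against the secondary-antibody-only control population.
import numpy as np

def percent_positive(sample_intensity, control_intensity, percentile=99.0):
    """Percent of cells whose fluorescence exceeds the control cutoff."""
    gate = np.percentile(control_intensity, percentile)
    return 100.0 * np.mean(np.asarray(sample_intensity) > gate)

# Synthetic example: a dim control and a sample with a brighter subpopulation.
rng = np.random.default_rng(0)
control = rng.lognormal(mean=2.0, sigma=0.4, size=10_000)
sample = np.concatenate([rng.lognormal(2.0, 0.4, 6_000),   # negative cells
                         rng.lognormal(3.5, 0.4, 4_000)])  # positive cells
print(f"TUJ1+ (simulated): {percent_positive(sample, control):.1f}%")
```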

III. RESULTS

A. NSC Differentiation in 2D and 3D Collagen Cultures

Cell migration could be observed after day 1 in both culture conditions. After 3 d, differences were apparent in the patterns of cell migration away from the neurospheres (Fig. 1). In 2D, the migrating cells formed chains of cells extending from the spheres, as seen in previous studies [20], where the cells migrate in contact with one another rather than as random cells adhered to a substrate. In 3D, however, the cells migrated away from the spheres with minimal cell-cell contact; cells further from the neurosphere extended large processes that spanned into the gel matrix (Figs. 1, 2) instead of contacting other cells as we observed in 2D.


Fig. 1 Phase contrast images of NSCs cultured for 3 d in 2D and 3D collagen matrices. Yellow arrows note processes that extend into the matrix. Scale bars, 50 µm

In order to determine the phenotype of the migrating cells, we stained the cultures with antibodies against βIII-tubulin, a neuronal marker expressed very early after commitment to the neuronal lineage, and GFAP, a glial cell marker expressed in astrocytes. We observed GFAP+ cells near the center of the neurospheres and βIII-tubulin+ cells migrating towards the neurosphere periphery; the latter effect was more pronounced in 3D culture. To determine differences in NSC differentiation between 2D and 3D culture, we quantified the expression of βIII-tubulin+ and GFAP+ cells for each condition using flow cytometry. Preliminary results suggest that in 2D culture there is an increase in the expression of βIII-tubulin+ cells (Fig. 3). The levels of GFAP+ cells remain unchanged between 2D and 3D culture, revealing no differences in the cells that differentiated into astrocytes (data not shown).

Fig. 3 Flow cytometry analysis of differentiated NSCs cultured for 3 d in 2D and 3D collagen matrices. Differentiated NSCs were labeled with antibodies against β1-integrin (left), βIII-tubulin (TUJ1, right) or the secondary antibodies only as the isotype controls

Fig. 4 NSC β1-integrin immunoreactivity (red) in 2D and 3D culture. Cell nuclei are labeled with DAPI (blue). Insets depict magnified features of representative cells. Arrows note the location of the neurosphere (out of field of view). Scale bars, 20 µm



Fig. 2 Neurosphere immunoreactivity for βIII-tubulin and GFAP after 3 d of culture in 2D and 3D collagen matrices. In 2D neurons are labeled in red and astrocytes in green. In 3D, neurons are labeled in green and astrocytes in red. Scale bars, 50 µm

B. β1-Integrin Expression in 2D and 3D Collagen Culture

The expression levels of β1-integrin in 2D and 3D culture are similar (Fig. 3). Immunocytochemistry shows that cells in both culture conditions expressed β1-integrin throughout the cell. However, in 2D β1-integrin was expressed in a clustered pattern, with several large punctate complexes in the cells' processes, while in 3D β1-integrin reactivity was more diffuse, with complexes of higher intensity around the cell bodies (Fig. 4).

IV. DISCUSSION

This study focuses on determining how environment dimensionality affects neural stem cell outcome. The experiments reported here show that 3D culture impacts NSC differentiation and migration events. In 3D, instead of the characteristic chain migration observed in 2D and seen in previous studies [20], the cells migrate away from the neurospheres in a manner that seems more independent of cell-cell interactions. Moreover, in 3D, isolated neurons migrated further away from the spheres, into the gel matrix, and extended longer neurites than in 2D culture. Based on these findings, we hypothesize that cell-matrix interactions play a more important role than cell-cell signals during cell migration in 3D culture. We also note that in 2D there was a greater percentage of differentiated neurons, indicating that flat 2D culture may induce neuronal differentiation in comparison to the soft 3D gels used in these experiments. Flow cytometry analysis verified that total β1-integrin expression was unaffected by culture dimensionality, which agrees with previous findings with non-neuronal cells [12,16]. However, differences in β1-integrin staining patterns were evident within the cells, as seen in the dissimilar arrangement of integrin-mediated adhesion sites in 2D vs 3D.


V. CONCLUSIONS

The goal of this study was to demonstrate that 3D culture modulates NSC integrin signaling events and alters NSC outcome. Our work to date has demonstrated that NSC migration and differentiation are altered with culture dimensionality: in 2D there is an increase in the neuronal population and cells undergo chain migration, whereas in 3D, differentiated cells adopt a random migration pattern and extend longer neurites. Ongoing work focuses on the confirmation of these results via in-depth study of β1-integrin signaling pathways to determine how the individual differentiated cellular populations adjust these regulatory events to changes in culture dimensionality. Further studies in NSC biology combined with improved engineered cell scaffolds will certainly present rewards in the near future, including the development of new therapies for several types of neurological disorders.

ACKNOWLEDGMENT

We thank C. Petty for confocal microscopy training and technical assistance on the Leica SP5 (funded by NSF DBI-0722569); Dr. J. Lathia for training in NSC dissection and isolation; and Dr. S. Rosenberg for training and technical assistance with FACS analysis. This work was supported by NIH-NINDS R01NS065205 (JBL) and the Henry Luce Foundation (JBL); AR was supported by the Wyeth Fellowship at UMBC.

REFERENCES

1. Schmidt, C.E. & Leach, J.B. (2003) Neural tissue engineering: strategies for repair and regeneration. Annu Rev Biomed Eng 5, 293-347.
2. Ma, W. et al. (2004) CNS stem and progenitor cell differentiation into functional neuronal circuits in three-dimensional collagen gels. Exp Neurol 190, 276-88.
3. Martinez-Ramos, C. et al. (2008) Differentiation of postnatal neural stem cells into glia and functional neurons on laminin-coated polymeric substrates. Tissue Eng Part A 14, 1365-75.
4. Svendsen, C.N. et al. (1997) Long-term survival of human central nervous system progenitor cells transplanted into a rat model of Parkinson's disease. Exp Neurol 148, 135-46.
5. Kulbatski, I., Mothe, A.J., Nomura, H. & Tator, C.H. (2005) Endogenous and exogenous CNS derived stem/progenitor cell approaches for neurotrauma. Curr Drug Targets 6, 111-26.
6. Lepore, A.C. et al. (2006) Long-term fate of neural precursor cells following transplantation into developing and adult CNS. Neuroscience 139, 513-30.
7. Ma, W. et al. (2008) Reconstruction of functional cortical-like tissues from neural stem and progenitor cells. Tissue Eng Part A 14, 1673-86.
8. Park, K.I., Teng, Y.D. & Snyder, E.Y. (2002) The injured brain interacts reciprocally with neural stem cells supported by scaffolds to reconstitute lost tissue. Nat Biotechnol 20, 1111-7.
9. Freudenberg, U. et al. (2009) A star-PEG-heparin hydrogel platform to aid cell replacement therapies for neurodegenerative diseases. Biomaterials 30, 5049-60.
10. Bible, E. et al. (2009) The support of neural stem cells transplanted into stroke-induced brain cavities by PLGA particles. Biomaterials 30, 2985-94.
11. Cukierman, E., Pankov, R., Stevens, D.R. & Yamada, K.M. (2001) Taking cell-matrix adhesions to the third dimension. Science 294, 1708-12.
12. Paszek, M.J. et al. (2005) Tensional homeostasis and the malignant phenotype. Cancer Cell 8, 241-54.
13. Hoffman, R.M. (1993) To do tissue culture in two or three dimensions? That is the question. Stem Cells 11, 105-111.
14. Cullen, D.K., Lessing, M.C. & LaPlaca, M.C. (2007) Collagen-dependent neurite outgrowth and response to dynamic deformation in three-dimensional neuronal cultures. Ann Biomed Eng 35, 835-46.
15. Yamada, K.M., Pankov, R. & Cukierman, E. (2003) Dimensions and dynamics in integrin function. Braz J Med Biol Res 36, 959-66.
16. Cukierman, E., Pankov, R. & Yamada, K.M. (2002) Cell interactions with three-dimensional matrices. Curr Opin Cell Biol 14, 633-9.
17. Wozniak, M.A., Modzelewska, K., Kwong, L. & Keely, P.J. (2004) Focal adhesion regulation of cell behavior. Biochim Biophys Acta 1692, 103-19.
18. Geiger, B. (2001) Cell biology. Encounters in space. Science 294, 1661-3.
19. Lathia, J.D. et al. (2007) Patterns of laminins and integrins in the embryonic ventricular zone of the CNS. J Comp Neurol 505, 630-43.
20. Jacques, T.S. et al. (1998) Neural precursor cell chain migration and division are regulated through different beta1 integrins. Development 125, 3167-77.
21. O'Connor, S.M. et al. (2000) Primary neural precursor cell expansion, differentiation and cytosolic Ca(2+) response in three-dimensional collagen gel. J Neurosci Methods 102, 187-95.
22. Watanabe, K., Nakamura, M., Okano, H. & Toyama, Y. (2007) Establishment of three-dimensional culture of neural stem/progenitor cells in collagen Type-1 Gel. Restor Neurol Neurosci 25, 109-17.
23. Levenberg, S. et al. (2003) Differentiation of human embryonic stem cells on three-dimensional polymer scaffolds. Proc Natl Acad Sci U S A 100, 12741-6.
24. Reynolds, B.A. & Weiss, S. (1992) Generation of neurons and astrocytes from isolated cells of the adult mammalian central nervous system. Science 255, 1707-10.
25. Bez, A. et al. (2003) Neurosphere and neurosphere-forming cells: morphological and ultrastructural characterization. Brain Res 993, 18-29.

Author: Jennie B. Leach
Institute: University of Maryland Baltimore County
Street: 1000 Hilltop Circle
City: Baltimore, MD 21250
Country: USA
Email: [emailprotected]

A Microfluidic Platform for Optical Monitoring of Bacterial Biofilms

M.T. Meyer1,2, V. Roy1,3, W.E. Bentley1, and R. Ghodssi1,2,4

1 Fischell Department of Bioengineering, 2 Institute for Systems Research, 3 Department of Molecular and Cell Biology, 4 Department of Electrical and Computer Engineering, University of Maryland College Park, College Park MD, USA

Abstract— Bacterial biofilms are pathogenic matrices which characterize a large number of infections in humans and are often formed through bacterial intercellular molecular signaling. A microfluidic platform for the evaluation of bacterial biofilms based on optical density was fabricated and tested. The platform was used to non-invasively observe the formation of Escherichia coli biofilms. These methods were corroborated by measurement of biofilm optical thickness. The dependence of biofilm optical density on bacterial communication was evaluated. After 60 hours of growth at 10 µL/hr, wild-type biofilms were approximately 100% thicker and 160% more optically dense than biofilms formed by non-communicating bacteria. The thicknesses of the detected biofilms are comparable to those found in literature for both in vitro and in vivo biofilms seen in microbial infections. The platform was also used to observe the effect of flow parameters on biofilm adhesion; results indicate that bacterial communication during biofilm formation is necessary for adherent biofilms. The presented platform will be used in characterization of biofilm formation and response in drug discovery applications.

Keywords— microfluidics, bacterial biofilms, biofilm optical density, bioMEMS.

I. INTRODUCTION Bacterial biofilms have been linked to a type of intercellular molecular communication known as quorum sensing. Once an infection reaches a threshold population, molecular signals dictate a change in phenotype resulting in the formation of a pathogenic biofilm comprised mainly of bacteria and an extracellular polysaccharide matrix. Biofilms are of particular interest since they are involved in 65% of bacterial infections in humans [1]. Bacterial biofilms are particularly difficult to treat due to elevated resistance to antibiotics [2]. The prevalence and resistance of bacterial biofilms underscores the need to understand the mechanisms of biofilm formation and development toward the goal of treating and preventing bacterial biofilms. Bacterial biofilms have been recently investigated in microfluidic environments, which allow for control of the microenvironment of the biofilm. Microfluidic devices are therefore well suited for the study of biofilm growth. Janakiraman et al [3] evaluated the thickness and morphology of biofilms formed under varying conditions

within a microfluidic channel. However, these studies focus on endpoint measurements using external equipment for evaluation of the biofilm. In recent years, microfluidic systems integrated with microsensors have risen as a promising platform for drug development. These platforms take advantage of microfabrication techniques to batch-fabricate devices that not only are inexpensive and small, but also can serve as a platform for the integration of biological elements with microsensors. There exist many varieties of microdevices targeted toward bacterial detection. Capacitive sensors have been applied toward real-time sensing of cell sedimentation and adhesion [4]. Richter et al have developed sensors for fungal biofilm detection using impedimetric sensing [5]. In contrast to these devices that probe the electrical properties of cells, Bakke et al [6] have presented work at the macroscale using optical density as a non-invasive, label-free means of evaluating biofilm growth. The authors demonstrated that the optical absorbance of a biofilm in the visible spectrum will increase with biofilm growth. In the presented work, we have designed and constructed a microfluidic platform for real-time, non-invasive monitoring of Escherichia coli biofilms as a function of their optical density. Biofilm optical density is compared with the measured optical thickness to verify the applicability of this method. The platform is also used to investigate the role of quorum sensing in the formation of bacterial biofilms by comparing the biofilm optical density and thickness trends of wild-type E. coli to those of E. coli incapable of quorum sensing molecule production.

II. MATERIALS AND METHODS

A. Microfluidic Platform Design and Fabrication

The platform consists of a micropatterned base and a microfluidic channel. The base is fabricated on Pyrex, providing a transparent substrate; 20 nm Cr and 200 nm Au are sputtered onto the Pyrex and patterned using contact photolithography to define two observation windows per microfluidic channel. Micropatterned windows allow for repeatable measurement positions within the channel. The chips are covered with a 1 µm layer of LPCVD-deposited SiO2 to promote adhesion of the microfluidic layer. The microfluidic channel is molded in polydimethylsiloxane (PDMS); the mold is patterned in 100 µm-thick SU-8 50 (MicroChem Corp, USA) using contact photolithography. PDMS (Sylgard 184, Dow Corning, USA), in a 10:1 ratio of resin to curing agent, is poured over the mold and cured at 80 °C for 20 min. Ports for interfacing the channel to fluidic tubing are drilled into the PDMS layer using a 2 mm dermatological punch. The molded PDMS is reversibly bonded to the chip, allowing for disassembly, cleaning, and reuse of each chip after experimentation. Methanol is applied to the PDMS layer, which is aligned to and placed on the chip. Evaporation of the methanol produces a reversible bond between the PDMS and the top layer of silicon dioxide on the chip. Schematics of the microfluidic platform are given in Figure 1.

Fig. 1 a) Schematic of bacterial deposition b) Cross-section of microfluidic platform

Assembled platforms are each affixed to a glass slide, then aligned to and positioned over two photodiodes (BS520, Sharp Microelectronics, USA) per microfluidic channel. One end of Tygon tubing is connected to a PDMS port via a barbed tubing coupler (McMaster Carr 5117K41, USA), and the other is connected to a syringe pump operating in suction mode. After fluidic assembly, an array of red, high-intensity LEDs is aligned to and positioned over the platforms. A data acquisition card interfaced to LabVIEW (National Instruments, USA) is used to monitor the outputs of the sensing photodiodes and the LED array. The entire assembly is positioned in an incubator maintained at 37 °C.

B. Strains Used

Wild-type E. coli W3110 was selected as a standard for biofilm formation in the microfluidic platform. In investigating the role of quorum sensing in optically detectable biofilm formation, MDAI2, a luxS-null mutant of E. coli W3110 [7], was used as a negative control. All suspension and biofilm cultures were grown in LB media.

C. Platform Operation

The device is prepared for experimentation by first depositing the bacteria of interest in the microfluidic channel. Bacteria are grown in suspension at 37 °C to an OD600 of 0.25, then suctioned into the assembled platform. The channel is incubated with the inoculum for 2 hours to allow for adhesion to the substrate. The channel is rinsed for 15 min with LB growth media at a flow rate of 10 µL/hr, corresponding to an average velocity of 0.06 mm/s within the channel. The platform is then continuously operated with LB media at 10 µL/hr, and changes in optical signals are monitored using the LabVIEW interface. The change in photodiode voltage over the growth period is evaluated with respect to a baseline voltage measured over 15 min after rinsing. The change in voltage, corresponding to a change in transmitted light intensity, is converted to a change in optical absorbance for evaluation of results. While the data from the two measurement windows in the channel are not identical, results are similar; the optical absorbance data reported is the average of data from both measurement locations within a channel.

In assessing the thickness of the biofilm, the platform is removed at selected timepoints. Optical thickness is evaluated using an optical microscope and measuring the distance between the focal plane of the channel bottom and the focal plane of the top of any accumulated biomass. Thickness is measured at 5 locations around each observation window and averaged.
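The voltage-to-absorbance conversion is not spelled out in the text; a minimal sketch, assuming the photodiode voltage is proportional to transmitted light intensity (Beer-Lambert form), would be:

```python
# Illustrative sketch (assumed relationship, not the authors' code): convert
# photodiode voltage to a change in optical absorbance, treating voltage as
# proportional to transmitted light intensity.
import numpy as np

def delta_absorbance(voltage, baseline_voltage):
    """Change in optical density relative to the post-rinse baseline."""
    return -np.log10(np.asarray(voltage) / baseline_voltage)

# Example: a 20% drop in transmitted light reads as ~0.097 absorbance units.
v_baseline = 2.50                         # V, averaged over the 15 min baseline
v_growth = np.array([2.50, 2.30, 2.00])   # V, during biofilm growth
print(delta_absorbance(v_growth, v_baseline))  # -> approx [0., 0.036, 0.097]
```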

III. RESULTS

The presented method for evaluating bacterial biofilms based on optical density was used to observe several phenomena within a microfluidic flow cell.

A. Parallel Operation of Devices

The design of the setup, including inexpensive microfluidic platforms, sensors, and light sources, makes this method easily arrayed. This capability is demonstrated by the operation of two platforms simultaneously. In the example shown in Figure 2, one microfluidic channel was inoculated with E. coli W3110, while the other was not inoculated with any bacteria; both channels were subsequently exposed to a continuous flow of LB growth media at 10 µL/hr. The lack of optically detectable microbial growth in the latter channel indicates the degree of sealing and sterility achieved in the setup. The capability of parallel operation allows for multiple experiments to have identical environmental parameters that may otherwise vary between experiments.


Fig. 2 Change in optical density concurrently observed in a channel containing an E. coli W3110 biofilm and in a channel filled with LB media

Fig. 3 Change in optical density concurrently observed in two channels containing E. coli W3110 and MDAI2 respectively

B. Evaluation of Biofilm Growth

The platform was utilized to examine the role of quorum sensing in biofilm formation. Wild-type E. coli W3110 was grown in parallel with MDAI2 continuously in the microfluidic platform for 60 hours, yielding the optical density curves shown in Figure 3. This was compared to the progression of optical thickness of adherent films over the same period of time (Figure 4). As depicted, both strains of bacteria produced a measurable change in optical density, and biofilm formation was observed in both channels. Both the optical density and optical thickness data follow the same general trends over the time scale investigated, indicating that quantification via optical density is an appropriate method for evaluating bacterial biofilms. For approximately the first 15 hours of operation, W3110 and MDAI2 exhibit similar trends in increasing optical density; after this point, the optical density of the W3110 biofilm rises above that of the MDAI2 biofilm. Over a 60 hour growth period, bacterial formations within the MDAI2 channel were also thinner than W3110 biofilms.

The difference between the two types of biofilms investigated can be highlighted by observing the biofilms' response to increased shear stress; biofilm stability may be evaluated by the degree of adhesion to the substrate. Biofilms previously formed at 10 µL/hr over 48 hours were rinsed with LB media at a flow rate of 400 µL/hr for 30 minutes, at which time the initial flow rate was restored. The change in optical density for MDAI2 and W3110 biofilms with rinsing is shown in Figure 5. After rinsing, the average optical thickness of the W3110 film was 56 ± 8 µm, and that of the MDAI2 film was 11 ± 3 µm. Over the rinsing period, the change in optical density for the W3110 film was minimal, corresponding to the small change in thickness compared with the value in Figure 4. Conversely, the MDAI2 film exhibited large decreases in thickness and in optical density, indicating weaker adhesion.

Fig. 4 Optical thickness measurements of E. coli W3110 and MDAI2 biofilms

Fig. 5 Rinsing of E. coli W3110 and MDAI2 biofilms measured by the change in optical density (flow rate stepped from 10 µL/hr to 400 µL/hr and back to 10 µL/hr)

IV. DISCUSSION

The methods presented provide the unique capability of continuous, real-time monitoring of biofilm optical density within the microfluidic channels. This is evident in the optical signals' frequent fluctuations, which are attributed to sloughing and re-deposition of clumps of bacteria in the continuous flow environment. Both W3110 and MDAI2 exhibit optically detectable biofilms. The results suggest that although biofilms are observed in the absence of quorum sensing activity, their structure and growth dynamics differ from those formed with quorum sensing. MDAI2 biofilms were found to be thinner, more sensitive to rinsing, and generally less optically dense than W3110 biofilms. These results agree with other experiments finding that while biofilm formation is promoted by quorum sensing, quorum sensing-inhibited bacteria are also capable of forming thin, dense biofilms [8].

The increasing optical density and thickness of the wild-type biofilm at the end of the longest experiment (60 hours) suggest ongoing maturation. Considering this, the time limitation imposed in this study appears too short for achievement of a fully matured biofilm. However, when considering the application of this platform toward drug development, it is most important to monitor the beginning stages of biofilm growth. Assuming growth dynamics after 60 hours follow the same trend of slowly approaching steady state, the time scale used is suitable for investigation of the initial formation and development of E. coli biofilms.

The optical changes detected correspond to biofilm thicknesses on the order of tens of microns; while this is comparable to in vitro biofilm thickness values found in literature [3,6], Candida albicans biofilms formed on catheters in vivo have been observed to be as thick as 100 µm [9]. Therefore, the platform may be used to observe both scientifically and clinically relevant biofilms. The presented work lays the foundation for the development of a lab-on-a-chip for biofilm observation; external photodiodes may be replaced by on-chip photodiodes embedded in a silicon substrate, and other types of sensors may be integrated for detection of molecules indicative of quorum sensing and biofilm growth.

V. CONCLUSION

We present a unique microfluidic platform and method for optical monitoring of bacterial biofilms. Biofilm growth within a microfluidic channel is evaluated based on increasing optical density, which was observed to follow the same trends as the biofilm optical thickness. Parallel operation of microfluidic channels allows for simultaneous comparison of biofilms formed under differing conditions. The system was used to compare the growth of wild-type E. coli as well as E. coli incapable of quorum sensing signaling. Biofilms formed by the latter strain exhibited an overall lower optical density and optical thickness. The integrity of both types of biofilm was evaluated by exposing formed biofilms to a high shear rate. The capability of continuous sensing provided by this platform is vital to the monitoring of bacterial biofilm growth, and will aid the development of drugs inhibiting biofilm formation.

ACKNOWLEDGMENT The authors acknowledge financial support from the R. W. Deutsch Foundation and the National Science Foundation Emerging Frontiers in Research and Innovation (NSF-EFRI). The authors also appreciate the support of the Maryland NanoCenter and its FabLab.

REFERENCES

1. Potera C (1999) Forging a link between biofilms and disease. Science 283:1837-1839
2. Stewart P (2002) Mechanisms of antibiotic resistance in bacterial biofilms. Int J Med Microbiol 292:107-113
3. Janakiraman V, Englert D, Jayaraman A et al (2009) Modeling growth and quorum sensing in biofilms grown in microfluidic chambers. Ann Biomed Eng 37:1206-1216
4. Prakash S, Abshire P (2007) On-chip capacitance sensing for cell monitoring applications. IEEE Sensors 7:440-447
5. Richter L, Stepper C et al (2007) Development of a microfluidic biochip for online monitoring of fungal biofilm dynamics. Lab Chip 7:1723-1731
6. Bakke R, Kommedal R, Kalvenes S (2001) Quantification of biofilm accumulation by an optical approach. J Microbiol Meth 44:13-26
7. DeLisa M, Valdes J, Bentley W (2001) Mapping stress-induced changes in autoinducer AI-2 production in chemostat-cultivated Escherichia coli K-12. J Bacteriol 183:2918-2928
8. Davies D, Parsek M et al (1998) The involvement of cell-to-cell signals in development of a bacterial biofilm. Science 280:295-298
9. Andes D, Nett J et al (2004) Development and characterization of an in vivo central venous catheter Candida albicans biofilm model. Infect Immun 72:6023-6031

Corresponding Author: Reza Ghodssi
University of Maryland, College Park
Department of Electrical and Computer Engineering
College Park, MD, USA
Email: [emailprotected]


Conduction Properties of Decellularized Nerve Biomaterials

M.G. Urbanchek1, B.S. Shim2, Z. Baghmanli1, B. Wei1, K. Schroeder3, N.B. Langhals1, R.M. Miriani1, B.M. Egeland1, D.R. Kipke1, D.C. Martin2, and P.S. Cederna1

1 University of Michigan/Surgery, Plastic Surgery, Ann Arbor, USA
2 University of Delaware/Materials Science & Engineering, Newark, USA
3 Hope College/Literature, Science, and the Arts, Holland, USA

Abstract— The purpose of this study is to optimize poly(3,4-ethylenedioxythiophene) (PEDOT) polymerization into decellular nerve scaffolding for interfacing to peripheral nerves. Our ultimate aim is to permanently implant highly conductive peripheral nerve interfaces between amputee stump nerve fascicles and prosthetic electronics. Decellular nerve (DN) scaffolds are an FDA approved biomaterial (Axogen™) with the flexible tensile properties needed for successful permanent coaptation to peripheral nerves. Biocompatible, electroconductive PEDOT facilitates electrical conduction through PEDOT-coated acellular muscle. New electrochemical methods were used to polymerize various PEDOT concentrations into DN scaffolds without the need for a final dehydration step. DN scaffolds were then tested for electrical impedance and charge density. PEDOT-coated DN scaffold materials were also implanted as 15-20 mm peripheral nerve grafts. Measurement of in-situ nerve conduction immediately followed grafting. DN showed significant improvements in impedance for dehydrated and hydrated DN polymerized with moderate and low PEDOT concentrations when compared with DN alone (α ≤ 0.05). These measurements were equivalent to those for DN with maximal PEDOT concentrations. In-situ nerve conduction measurements demonstrated that DN alone is a poor electro-conductor, while the addition of PEDOT allows DN scaffold grafts to compare favorably with the "gold standard", autograft (Table 1). Surgical handling characteristics for conductive hydrated PEDOT DN scaffolds were rated 3 (pliable) while the dehydrated models were rated 1 (very stiff), compared with autograft ratings of 4 (normal). Low concentrations of PEDOT on DN scaffolds provided significant increases in electroactive properties which were comparable to the densest PEDOT coatings. DN pliability was closely maintained by continued hydration during PEDOT electrochemical polymerization without compromising electroconductivity.

Keywords— poly(3,4-ethylenedioxythiophene), peripheral nerve, decellular nerve, nerve conduction.

I. INTRODUCTION

Health care professionals are challenged with enabling stable biological interfaces to currently available prosthetic arm devices, which are microprocessor controlled and power outfitted [1]. Ultimately we see amputees using the peripheral nerves remaining in their stump to both control these motorized prosthetics and receive feedback from sensors

located in the prosthetics [2]. Our aim is to permanently implant highly conductive peripheral nerve interface (PNI) connectors between amputee stump nerve fascicles and prosthetic electronics. The purpose of this study is to increase the fidelity of signal transmission across the PNI. Poly(3,4-ethylenedioxythiophene) (PEDOT) is intrinsically an electrical conductor. Acellular muscle, when polymerized with the maximal density of PEDOT, is shown to have electrical conduction properties similar to copper wire [3]. However, materials maximally polymerized with PEDOT acquire a brittleness which is incompatible with coaptation to living peripheral nerve. Decellular nerve (DN) scaffolds are an FDA approved biomaterial used clinically to repair peripheral nerve defects (Axogen™). They are extremely pliable, sized appropriately in diameter, and can be easily sewn to peripheral nerve for long term attachment without breaking off or causing injury to the native nerve. We plan to optimize the process by which PEDOT is polymerized into DN scaffolding. We seek both: a) increased electrical conductivity through DN by testing various concentrations of PEDOT deposition and b) maintenance of a pliable DN after PEDOT deposition.

II. MATERIALS AND METHODS

A. Overview of Experimental Design

Our purpose is to optimize the electrical fidelity and gain seen when PEDOT is polymerized on DN scaffolding while minimizing the sharp rigidity which accompanies highly conductive but compact concentrations of PEDOT.

Hypothesis: We hypothesize that electrochemical deposition of PEDOT allows DN scaffolds to remain pliable while it confers improved electro-conductive properties to the scaffold.

Bench test and in-situ experimental designs: PEDOT can be deposited onto DN scaffolds by methods that either include dehydration steps (chemical method) [4] or allow scaffolds to remain continuously hydrated (electrochemical method) [5]. Using each method, various concentrations of PEDOT were polymerized onto DN scaffolds. We tested the DN scaffolds with bench tests which measured impedance (fidelity) and cyclic voltammetry to determine charge transfer capacity (gain in amplitude). Then, based on the bench tests, we selected the "best" concentrations for dehydrated and hydrated DN scaffolds and conducted in-situ tests for measuring nerve conduction properties (biological signal conductivity).

Rat sciatic nerves were harvested at the University of Michigan and decellularized by Axogen™. The DN scaffolds were polymerized with PEDOT. A chemical polymerization method used an EDOT monomer (Clevios™ M, H.C. Starck, Coldwater, MI) and iron chloride as a dopant [4]; the DN scaffolds must be dehydrated for the PEDOT to adhere. EDOT solutions were made in low (Low), moderate (Mod), and high (High) concentrations which corresponded to the amount of PEDOT deposited. The electrochemical method for PEDOT deposition used a PEDOT polymer and polystyrenesulfonic acid (Clevios™ P, H.C. Starck, Coldwater, MI); DN scaffold dehydration was not needed [5]. Low and Mod concentrations of PEDOT were deposited using the method which allowed constant hydration of the DN scaffolds.

B. Measurement of Test Material Impedance and Specific Charge Density

Electrical impedance spectroscopy (EIS) testing was applied to determine electrode impedance (Frequency Response Analyzer, Version 4.9.007, Eco Chemie B.V., Utrecht, The Netherlands) and cyclic voltammetry (CV) to determine charge transfer capacity (n ~ 4 per 9 groups) (General Purpose Electrochemical System, Version 4.9.007, Eco Chemie B.V., Utrecht, The Netherlands) [5]. Graphs were viewed using MatLab (Version 7.8.0.347 R2009a; MathWorks, Inc.). Materials tested were between 15 and 20 mm in length. Impedance values were sampled at frequencies of 10, 100, and 1000 Hz. For CV, a scan rate of 10 mV/s was used and the potential on the working electrode was swept from -1.0 to 1.0 V. Specific charge density was calculated by dividing the charge transfer capacity by each sample's surface area (surface area of a cylinder).

C. Measurement of Nerve Conduction

For in-situ measurements, dehydrated (DPEDOT) and hydrated (HPEDOT) DN scaffolds were polymerized with moderate concentrations of PEDOT. Selection of the moderate concentration for further testing was based on favorable results from the bench tests. Five experimental groups were tested: Intact nerve, Autograft, DN (hydrated as shipped frozen), DPEDOT, and HPEDOT. Using 10-0 nylon suture, DN scaffold materials were sewn to the ends of divided rat peroneal nerve as 15-20 mm peripheral nerve grafts (n ≥ 5 per 5 groups). Measurement of in-situ nerve conduction immediately followed grafting (Synergy T2X System, Viasys NeuroCare, Madison, WI). Stimulation was applied with a bipolar electrode placed on the nerve proximal to the nerve graft and as close to the sciatic notch as possible. Muscle electromyographic (EMG) responses were recorded with a needle electrode in the extensor digitorum longus muscle located distal to the nerve graft. Reference and ground needle electrodes were placed distal to the recording electrode [6]. Values recorded were EMG response latency, maximal amplitude, and spike area, as well as nerve conduction velocity, rheobase, and the stimulation amperage equal to 20% greater than that used to maximize EMG.

D. Graft Stiffness Rating Scale

Graft stiffness was rated using a scale from 4 to 0. A score of 4 meant the DN scaffold handled as native nerve; 3 = pliable, slight resistance to bending; 2 = rigid, resistant to needle insertion; 1 = brittle, very stiff, cut the suture; and 0 meant a needle could not be placed through the material.

E. Animal Care and Compliance

Rats used were male Fischer-344 rats which were retired breeders (Charles River Laboratory, Kingston, NY). All procedures were approved by the Institutional Animal Care and Use Committee of the University of Michigan and were in strict accordance with the National Research Council's Guide for the Care and Use of Laboratory Animals [7]. For all surgical procedures, rats were given an analgesic (buprenorphine, 0.05 mg/kg) prior to anesthesia with sodium pentobarbital (65 mg/kg). All rats were euthanized with an UCUCA-approved procedure.


F. Statistical Analysis

A one-way analysis of variance (ANOVA) was performed, followed by Tukey's post hoc test to determine significant differences between experimental groups in the bench test and in the in-situ studies. A p value ≤ 0.05 was considered significant.
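Before turning to the results, a hedged sketch of the specific charge density computation described in II.B (integrate the CV current over potential, divide by the scan rate to get charge, then normalize by a cylindrical surface area). The function name, the use of the absolute current, and the example values are illustrative assumptions, not the authors' analysis code:

```python
# Sketch of the II.B charge computation under stated assumptions:
# Q = integral(i dt) = integral(i dV) / scan_rate; area is the lateral
# surface of a cylindrical graft segment (pi * d * L).
import numpy as np

def specific_charge_density(current_A, potential_V, scan_rate_V_per_s,
                            diameter_m, length_m):
    """Charge transfer capacity per unit area from one CV sweep (C/m^2)."""
    charge_C = np.trapz(np.abs(current_A), potential_V) / scan_rate_V_per_s
    area_m2 = np.pi * diameter_m * length_m
    return charge_C / area_m2

# Example: a 1.5 mm diameter, 15 mm long scaffold swept at 10 mV/s.
V = np.linspace(-1.0, 1.0, 401)
i = 50e-6 * np.ones_like(V)  # idealized 50 uA plateau current
print(specific_charge_density(i, V, 10e-3, 1.5e-3, 15e-3), "C/m^2")
```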

III. RESULTS

Data indicate that deposition of PEDOT on DN scaffolds significantly lowers (improves) impedance across all the hydrated as well as all dehydrated DN materials when compared to dehydrated DN, with the exception of the Low dehydrated DN scaffold (Fig. 2). Specific charge density was increased (improved) only for the Mod dehydrated DN when compared with dehydrated DN (Fig. 3). Some PEDOT broke off the dehydrated DN scaffolds during EIS and CV testing. No charge density could be measured for three of seven scaffolds in the High dehydrated DN group. These zero scores, most likely due to PEDOT cracking and falling off, were included in the statistics and explain the drop in specific charge density seen for this group when compared with the Mod dehydrated DN group. The EIS and CV data taken together may indicate there was a ceiling effect on how much PEDOT was enough.

In-situ nerve conduction measurements demonstrated that hydrated DN scaffolds were poor electrical conductors. While intact nerve was best, addition of PEDOT allowed grafted DN scaffolds to compare favorably with the "graft gold standard", autograft nerve (Table 1). Autograft, DPEDOT, and HPEDOT grafts did not vary from each other for any of the EMG data (latency, maximal amplitude, spike area) or nerve conduction measurements (velocity, rheobase, and the stimulation voltage). Statistical power for these findings exceeded 0.80. Though not significant, greater stimulation amperage was needed to initiate a twitch response (rheobase) and a maximal response amplitude for conduction to pass through the DN scaffold graft with PEDOT when compared with the Autograft.

Surgical handling characteristics for the hydrated DN scaffold and the Autograft nerve were rated 4 (as native nerve). The highly conductive HPEDOT DN scaffolds were rated 3 (pliable). DPEDOT DN scaffolds were rated 2 (rigid). The dehydrated DN with High PEDOT group from the bench studies was rated 1 (very stiff). Polymerization of PEDOT by the electrochemical method allowed DN scaffolds to remain hydrated and therefore to behave almost like native nerve during surgery.

Our hypothesis was that electrochemical deposition of PEDOT would allow DN scaffolds to remain pliable while PEDOT conferred improved electro-conductive properties to the scaffold. The hypothesis was supported by most of the data. Bench test findings for impedance indicated that PEDOT does confer improvements in conduction fidelity and signal to noise ratio. In-situ tests showed that PEDOT deposition on DN facilitated biological signal conduction across a nerve gap. However, our bench cyclic voltammetry results did not show convincing improvements in charge transfer capacity (signal gain) for the PEDOT-coated DN scaffolds. The electrochemical polymerization process did allow the DN scaffolds to remain pliable following polymerization with PEDOT.

IV. DISCUSSION

Bench tests measured improvements in impedance for DN scaffolds polymerized with PEDOT. Lower impedance indicates the electrical signal has better fidelity or signal to noise ratio. Devices with low impedance generally have lower overall power requirements, leading to extended battery life. Decreased impedance is a favorable quality for a PNI scaffold.

Specific charge density measurements did not increase significantly for the PEDOT-coated DN scaffolds, except for the Mod-coated dehydrated PEDOT DN group. Charge density is thought to determine charge transfer capacity or gain. A PNI needs to contribute some as yet unknown quantity of charge transfer. PEDOT is known to accumulate greater charge density because the fluffy PEDOT structure increases surface area (Fig. 1). However, too much specific charge density could damage the native peripheral nerve. The meaning and benefits of charge density need further study.

DN scaffolding alone, although not a good electrical conductor, may be a fine base material for a PNI. Peripheral nerves grow through it and, though it is a xenograft material, inflammation and immune response to it are minimal [8]. This is the first study to show that addition of PEDOT to DN allowed action potential type signals to pass across a 15 to 20 mm nerve graft. A 15 mm distance is the desired length for a PNI. Higher stimulation was needed to initiate a twitch response and a maximal response across the graft because signals are obstructed by scarring at the two nerve-to-graft coaptation sites. Whether the "biologic like" signals across the graft were purely electrical, ionic, or a mixture of the two was undefined.

The bench studies and the in-situ nerve conduction studies indicated that moderate concentrations of PEDOT on DN were enough to reduce impedance and facilitate conduction through the DN scaffold. Reduced PEDOT concentrations along with the electrochemical deposition allowed the DN scaffold to remain pliable; still slightly higher concentrations of PEDOT in the hydrated samples should be possible. Implanting this DN scaffold as part of a PNI is a realistic goal. Pliability allows DNs to move with the peripheral nerve endings rather than breaking off or injuring surrounding tissues. This study was an acute in-situ study and, therefore, carries certain limitations, as the DN scaffolds were not maintained in vivo as true PNIs would be. One cannot predict whether biological defenses would lead to degradation or encapsulation of the DN PNI materials without running a long term implant study.


V. CONCLUSIONS

Low concentrations of PEDOT on DN scaffolds can provide significant increases in electroactive properties which are comparable to maximal High PEDOT coatings. DN pliability is closely maintained by continued hydration during PEDOT electrochemical polymerization without compromising electro-conductivity.

ACKNOWLEDGEMENTS The views expressed in this work are those of the authors and do not necessarily reflect official Army policy. This work was supported by the Department of Defense Multidisciplinary University Research Initiative (MURI) program administered by the Army Research Office under grant W911NF0610218. Conflict of interest statement: No potential conflicts of interest are disclosed.

REFERENCES

1. Kuiken TA, Li G, Lock BA, et al. (2009) Targeted Muscle Reinnervation for Real-time Myoelectric Control of Multifunction Artificial Arms. JAMA 301(6):619-628.
2. Frost CM, Urbanchek MG, Egeland BM, et al. (2009) Development of a Biosynthetic "Living Interface" with Severed Peripheral Nerve. Plast Reconstr Surg 123(6S), p 12.
3. Egeland BM, Urbanchek MG, Abidian MR, et al. (2009) A Tissue-Based Bioelectrical Interface has Reduced Impedance Compared to Copper Wire and Nerve. Plast Reconstr Surg 123(6S), p 26.
4. Peramo A, Urbanchek MG, Spanninga SA, et al. (2008) In situ polymerization of a conductive polymer in acellular muscle tissue constructs. Tissue Eng Part A 14(3):423-32.
5. Richardson-Burns SM, Hendricks JL, Foster B, et al. (2007) Polymerization of the conducting polymer poly(3,4-ethylenedioxythiophene) (PEDOT) around living neural cells. Biomaterials 28:1539-1552.
6. Urbanchek MG, Egeland BM, Richardson-Burns SM, et al. In Vivo Electrophysiologic Properties of poly(3,4-ethylenedioxythiophene) (PEDOT) in Peripheral Motor Nerves. Plast Reconstr Surg 123(6S), p 89.
7. Institute of Laboratory Animal Resources (1996) Guide for the Care and Use of Laboratory Animals. 7th ed. National Academy Press, Washington, DC.
8. Whitlock EL, Tuffaha SH, Luciano JP, et al. (2009) Processed Allografts and Type I Collagen Conduits for Repair of Peripheral Nerve Gaps. Muscle Nerve 39:787-99.

Melanie G. Urbanchek, PhD
University of Michigan
109 Zina Pitcher Place, BSRB 2023
Ann Arbor, MI 48109-2200
USA
[emailprotected]


Reverse Cholesterol Transport (RCT) Modeling with Integrated Software Configurator

S. Adhikari

Sysoft Center for Systems Biology and Bioengineering, Hunterdon, NJ

Abstract— Reverse Cholesterol Transport (RCT) is a series of very complex biological pathways by which accumulated cholesterol is transported from the vessel wall macrophages and foam cells to the liver for excretion, thus preventing atherosclerosis, a build-up of plaque in the arteries often referred to as 'hardening of arteries.' Cardiovascular disease (CVD) is the leading cause of death in the US and other developed nations, costing the American healthcare system in excess of $450 billion per year. The underlying cause of CVD is atherosclerosis. There is a paradigm shift coming in CVD research and drug development: atherosclerosis will not just be managed, but can ultimately be eliminated. In this paper we describe a dynamic RCT model that quantifies the clinical effects of reverse cholesterol efflux. RCT has emerged in recent days as one of the most desirable methods of medical intervention to reverse atherosclerotic lesions. Optimized dynamic RCT modeling helps in therapeutic targeting of High Density Lipoproteins (HDL) with the help of ApoA-1 mimetic peptides and other oral small molecules. The net RCT pathway is quantified with multiple parameters that can change depending on clinical in-vivo or in-vitro conditions. A standard relational database model holds all the objects of the quantitative model. An optimization algorithm matches the aggregate model with the clinical RCT datasets; it automatically adjusts all the parameters until it finds the best solution. Multivariate analysis (MVA) aims to create a derived aggregate model, reducing the complexity of multidimensional data to a few latent variables that express the majority of the variance of the data set. MVA is also utilized to perform nonlinear multiple regression analysis between large data sets. Our dynamic model can interface with external data sets and other models using Systems Biology Markup Language (SBML), a computer-readable format for representing models of biochemical reaction networks in software.

Keywords— RCT, Cholesterol Efflux, Lipid Metabolism, HDL, Atherosclerosis.

I. INTRODUCTION

Reverse Cholesterol Transport (RCT) is a series of very complex biological pathways by which accumulated cholesterol is transported from the vessel wall macrophages and foam cells to the liver for excretion, thus preventing atherosclerosis, a build-up of plaque in the arteries often referred to as 'hardening of arteries.' Cardiovascular disease (CVD) is the leading cause of death in the US and other developed nations, costing the American healthcare system in excess of $450 billion per year. The underlying cause of CVD is atherosclerosis. There is a paradigm shift coming in CVD research and drug development: atherosclerosis will not just be managed, but can ultimately be eliminated.

Fig. 1 Reverse Cholesterol Transport (RCT)

Fig. 2 Cholesterol Efflux


II. RCT DYNAMIC MODELING Statin therapies – the current standard of care can only prevent the disease from progressing. RCT is the basis of new cardiovascular therapeutics that can reverse atherosclerosis. Major constituents of RCT include acceptors such as high density lipo-protien (HDL) and apoliproprotein A-1 (ApoA-1), and enzymes such as lechtin:cholesterol acyltransferase (LCAT), Phospholipid transfer protein (PLTP), hepatic lipase (HL), and cholesterol ester transfer protein (CETP). In addition to traditionally recognized transport pathways, RCT also takes place through passive diffusion, protein-facilitated diffusion, and complex mechanisms involving membrane micro-solubilization. On top of that, RCT is facilitated by other apoliproproteins such as ApoE, and ApoM. They are required for HDL formation, maturation, and consequent enhancement of RCT performance. ApoA1, ApoE, and ApoM recycle to some extent augmenting enhanced RCT performance. ApoA-1 is the main apoliproprotein in RCT. The main pathway involves docking of lipid free ApoA-1 into ABCA1 transporter, transfer of phospholipids, and free cholesterol into ApoA-1, and further chlesterol transfer through ABCG1 transporter. Multiple HDL3s fuse to form HDL2. Figure 1. shows some of the RCT pathways. Figure 2. shows the detailed cholesterol efflux process that forms the initial step in RCT. Realistic quantitative modeling of RCT is extremely difficult through conventional methods. It requires a software configurator aided, database driven systems biology platform that can use the complex series of dynamic solution parameters, boundary conditions, and use stochastic optimization, multivariate, and multiple regression techniques to match the results from one or more clinical trials. It seeks to integrate nonlinear dynamic interacts of many components and pathways. RCT has emerged in recent days as one of the most desirable methods of medical interventions to reverse the atherosclerotic lesions. Optimized dynamic RCT modeling helps in therapeutic targeting of High Density Liporoteins (HDL) with the help of ApoA-1 Mimetic Peptides, and other oral small molecules. We have considered many known and clinically verified RCT pathways. Each individual quantitative model, with dynamic parameters, boundary conditions, and other variables has become part of a RCT quantitative model database accessible to the software configurator. Some of these models are kinetic models developed through clinical

435

testing. In many cases, we had to adopt appropriate mathematical models using analytical and numerical methods. For example, one of the ways ApoA-1 removes cholesterol is by diffusion via a concentration gradient between the membrane cholesterol donor and the acceptor particle. Cholesterol molecules spontaneously desorb from plasma membrane, diffuse through the aqueous phase, and are then incorporated onto HDL particles by simple collision. Our quantitative model treats the solute concentration in the space surrounding a particle as a function of the distance and time. The diffusion process is defined as the amount of solute diffusing in the direction of the solute concentration gradient per unit area of a cross section lying perpendicular to the gradient. Application of appropriate boundary conditions provides the solution. The quantitative model follows the following equation for rate of change in concentration dc/dt and k (defined as the amount of solute diffusing in the direction of the solute concentration gradient per unit area of a cross section lying perpendicular to the gradient): k=D(dc/dρ) dc/dt = D[d2c/dρ2 + (2/ρ)dc/dρ] D is the aggregate diffusion coefficient, and ρ is radius from the center. The following boundary conditions apply: (a) c(ρ,0) = cbulk, the concentration at “infinite distance” or bulk concentration; (b) c(r,t) = csolubility , the solute concentration at saturation; and (c) c(∞, t) = cbulk. The solution (assuming constant r and cbulk) is: c =s + (cbulk – s)[1-(r/ρ) (1-erf((ρ –r)/(2Dt)1/2))] which when t is infinity (i.e., end of transition state), reduces to: c = s + (cbulk – s ) [1 – (r/ρ)] The RCT quantitative model uses many complex models including kinetics to evaluate cholesterol efflux from the macrophages to ApoA-1 via ABCA1 and ABCG1. The model is enhanced by the quantitative RCT effects of Caveolin, Sterol 27-hydroxylase (CYP27A1), scavenger receptor SR-B1 transport processes, and ABCG5/G8 hepatobiliary and intestinal sterol extraction gene. In addition, the aggregate model also considers the effect of ApoA-1, Apo-E, Apo-M recycling. Apo-E recycling in hepatocytes is found to be important in enhancing the selective uptake of HDL cholesteryl esters through SR-BI scavenger receptors. Additionally, recycling Apo-E may also serve as a chaperone for proper targeting and repositioning of recycling LRP1 or other receptors to the cell surface. In macrophages, complex models manifest cholesterol efflux


In macrophages, complex models manifest cholesterol efflux through ApoE recycling. The aggregate model also quantifies ApoE recycling acting as a biosensor of cholesterol entry and exit pathways, helping cells avoid the dangers of cholesterol accumulation and depletion. ABCA1 and ABCG1 coordinate the removal of excess cholesterol from macrophages using a diverse array of lipid acceptor particles. Cholesterol efflux is dependent on the phospholipid composition of HDL. PON1 enhances HDL-mediated macrophage cholesterol efflux via ABCA1. Smaller, denser HDL3 possesses a higher cholesterol efflux capacity. Interactions of ApoA-1 with ABCA1 deteriorate with age, affecting the capacity of HDL3 to mediate cholesterol efflux. ABCG5/G8 affects hepatobiliary and intestinal sterol excretion, the last RCT step.

III. CONCLUSIONS

The net RCT pathway is quantified with multiple parameters that can change depending on clinical in-vivo or in-vitro conditions. A standard relational database model holds all the objects of the quantitative model. An optimization algorithm matches the aggregate model with the clinical RCT datasets, automatically adjusting all the parameters until it finds the best solution. Multivariate analysis (MVA) aims to create a derived aggregate model, reducing the complexity of multi-dimensional data to a few latent variables that express the majority of the variance of the data set. MVA is also utilized to perform nonlinear multiple regression analysis between large data sets. Our dynamic model can interface with external data sets and other models using the Systems Biology Markup Language (SBML), a computer-readable format for representing models of biochemical reaction networks in software.
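As a schematic illustration of the parameter-matching step described above (not the actual Sysoft configurator), the following Python sketch fits the rate constants of a deliberately simplified, hypothetical two-parameter efflux model to a clinical time series by nonlinear least squares:

    import numpy as np
    from scipy.optimize import least_squares

    def model(params, t):
        # Toy first-order kinetics: normalized HDL cholesterol rises with an
        # efflux rate k_e and is removed with a clearance rate k_c (both assumed).
        k_e, k_c = params
        return (k_e / (k_e + k_c)) * (1.0 - np.exp(-(k_e + k_c) * t))

    def residuals(params, t, observed):
        return model(params, t) - observed

    t_obs = np.linspace(0.0, 24.0, 25)          # hours (hypothetical sampling)
    rng = np.random.default_rng(0)
    c_obs = model((0.30, 0.10), t_obs) + 0.01 * rng.normal(size=t_obs.size)
    fit = least_squares(residuals, x0=[0.1, 0.1], args=(t_obs, c_obs),
                        bounds=(0.0, np.inf))
    print(fit.x)  # recovered (k_e, k_c), close to (0.30, 0.10)

A realistic RCT model would involve many more coupled pathways, but the optimization loop, adjusting parameters until the model matches the clinical dataset, has this same structure.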

ACKNOWLEDGMENT

The author acknowledges the contributions and efforts of Sysoft Software and Bioengineers in this project.


Author: Sam Adhikari
Institute: Sysoft Center for Systems Biology and Bioengineering
Street: P.O. Box 219
City: Whitehouse Station
Country: USA
Email: [emailprotected]

Modeling Linear Head Impact and the Effect of Brain-Skull Interface

K. Laksari, S. Assari, and K. Darvish

Temple University, Department of Mechanical Engineering, Philadelphia, USA

Abstract— The aim of this research was to simulate a severe linear impact to the head and study its effect on a brain substitute material in terms of the deformations that generally lead to Traumatic Brain Injury (TBI). Simplified 2D models of a transverse section of the human brain were made with 5% gelatin as the brain surrogate material. The models underwent 55G deceleration and the strain distribution was measured through image processing. Finite element (FE) models of the experiments were developed using Lagrangian formulations and validated. Using physical material properties, the FE computational parameters were determined based on the results of strain distribution and posterior gap generation. The strain and pressure levels in the FE model both reach the injury threshold levels known for brain tissue.

Keywords— Traumatic Brain Injury, Head Impact, Finite Element Model.

I. INTRODUCTION

Traumatic Brain Injury (TBI) is one of the major causes of fatality and disability worldwide, with 5.3 million people living with a TBI-related disability in America alone [1]. The mechanisms of brain injury, from an engineering perspective, are those measurable physical quantities and processes that lead to functional and/or material failure in various tissues of the central nervous system. To calculate internal stress, strain, and pressure at all locations and at any given instant of time during an impact, a finite element model of the brain that can accurately capture the interaction between brain and skull is needed. Several such models have been developed in the past [e.g., 3]. In this study, two-dimensional physical and FE models of the human head under linear deceleration were developed with the focus of studying the modeling of the brain-skull interface and its effect on brain deformation. The aims were a) to measure the brain surrogate material deformations experimentally with various interface conditions and b) to validate finite element models of the experiments using physical viscoelastic material properties.

II. MATERIALS AND METHODS

5% gelatin was used as the brain substitute material. The dynamic viscoelastic material properties of the gel were determined from shear tests and are comparable to brain material properties (Table 1) [3].

A simplified 2D physical model of the human head, in the shape of a hollow disk with 100 mm diameter and 20 mm thickness, was made. To satisfy the 2D assumption, the disk was sealed at the top and bottom and displacements perpendicular to the plane of motion (vertical) were prevented. Meanwhile, by wetting the surfaces of the gel to reduce friction, the gel could deform freely in the horizontal plane. The disk was mounted on a high-speed track and crashed into a shock absorber, which creates impacts with constant decelerations between 30 and 70G [4]. A deceleration of 55G was chosen, which corresponds to a crash at about 30 mph with a HIC (Head Injury Criterion) of 700, the threshold for experiencing significant head injury [5]. The gel deformation during impact was quantified by tracking photo targets at 2200 frames per second (Phantom, v4.2) with an accuracy of ±0.2 mm (Figure 1).
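For reference, the HIC can be computed directly from a deceleration trace. The Python sketch below is our own illustration of the standard definition, HIC = max over (t1, t2) of (t2 − t1)[(1/(t2 − t1)) ∫ a dt]^2.5, with a in g and t in seconds; the example pulse is hypothetical:

    import numpy as np

    def hic(t, a_g, max_window=0.036):
        # Head Injury Criterion from acceleration a_g (in g) sampled at times t (s).
        # Brute-force search over all window pairs, capped at max_window (HIC36).
        cum = np.concatenate(([0.0],
                              np.cumsum(0.5 * (a_g[1:] + a_g[:-1]) * np.diff(t))))
        best = 0.0
        for i in range(len(t) - 1):
            for j in range(i + 1, len(t)):
                dt = t[j] - t[i]
                if dt > max_window:
                    break
                best = max(best, dt * ((cum[j] - cum[i]) / dt) ** 2.5)
        return best

    # Example: a constant 55 g pulse lasting about 31 ms gives
    # HIC ~ 55**2.5 * 0.031 ~ 700, the threshold value cited above.
    t = np.linspace(0.0, 0.031, 200)
    print(hic(t, np.full_like(t, 55.0)))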


Fig. 1 Maximum deformed shape in the experimental and corresponding FE models with 1 mm gap (posterior region indicated)

Two categories of experiments were conducted in order to reach a better understanding of the brain-skull interaction and the deformation of the brain during an impact: a) the gel was immersed in water, which completely filled the cylinder, representing the case where CSF has filled all the voids in the skull and has no room to leave the skull; b) there was an initial gap, ranging from 0 to 2 mm, between the cylinder and the gel. This was done to investigate the effect of the existence, and also the amount, of such an initial gap in causing large deformation of brain tissue.


The gel impact test with slip boundary condition was modeled in LS-DYNA (LSTC, CA) with a Lagrangian formulation, single-point-integration solid elements, and a soft contact algorithm. The gel's physical viscoelastic material properties were implemented in the MAT_GENERAL_VISCOELASTIC formulation (Table 1) and the skull was assumed to be rigid. For model validation, the parameters that were varied were the Poisson's ratio and the hourglass type and coefficient.

Table 1 Material properties of the surrogate brain material (5% gelatin)

G(t) = G_∞ + Σ_{i=1}^{4} G_i e^{−β_i t}

G_1 = 69.49 Pa (β_1 = 0.1 s⁻¹), G_2 = 104.96 Pa (β_2 = 1.0 s⁻¹), G_3 = 114.32 Pa (β_3 = 10 s⁻¹), G_4 = 761.4 Pa (β_4 = 100 s⁻¹), G_∞ = 130.72 Pa
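The Table 1 relaxation function can be evaluated directly. A minimal Python sketch (coefficient values transcribed from Table 1; the helper name is our own):

    import numpy as np

    # G(t) = G_inf + sum_i G_i * exp(-beta_i * t), Table 1 (5% gelatin)
    G_i    = np.array([69.49, 104.96, 114.32, 761.4])  # Pa
    beta_i = np.array([0.1, 1.0, 10.0, 100.0])         # 1/s
    G_inf  = 130.72                                    # Pa

    def shear_relaxation_modulus(t):
        # Shear relaxation modulus at a scalar time t (seconds)
        return G_inf + float(np.sum(G_i * np.exp(-beta_i * t)))

    print(shear_relaxation_modulus(0.0))    # instantaneous modulus, ~1181 Pa
    print(shear_relaxation_modulus(100.0))  # long-term plateau, ~G_inf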

III. RESULTS AND DISCUSSION

In the experiments, when the cylinder was filled with gel (no gap) or water filled the remaining gap between the cylinder and the gel, no significant deformation was observed. This can be explained by the incompressibility of water and gel, and it also agreed with the FE results. In the experiments with an initial gap, significant gel deformations were observed and the FE models were validated against the experimental data. The effective strain distribution (Figure 2) and the boundary deformation (Figure 3) were used for this validation. The figures are plotted for 1 mm initial gap, and the model results are calculated with Poisson's ratio equal to 0.4995. Both curves show reasonable agreement between the model and the experimental results. A rigorous quantitative assessment of this validation is underway. For hourglass control, the Flanagan-Belytschko stiffness form with exact volume integration for solid elements (HQ = 0.15) was found to work best for this problem. The default soft contact penalty factor (0.1) was sufficient to avoid instability. The strain levels in the FE simulations reached more than 25%, which is reported in the literature as the threshold of injury [6]. A comparison between the displacement data (in the center of the cylinder) and those determined by Hardy et al. in head impact experiments [9] shows that they agree in terms of the maximum and minimum relative displacements (2 to 5 mm).

In the case where the cylinder is filled with water, the relative acceleration of the brain with respect to the skull is related to the difference in their densities, which was very small (both ≈ 1000 kg/m³) and resulted in negligible strains. However, the change in pressure could potentially be an important cause of brain injury. The maximum pressure present in the frontal region of the gel was around 100 kPa, comparable to the peak pressure results given by Nahum (80 to 200 kPa) [10].

Fig. 2 Effective strain in the posterior region (strain vs. time; FE and experiment)


Fig. 3 Maximum boundary deformation (gap in mm vs. time; FE and experiment)

The next step in this research would be to measure the pressure change at various locations experimentally and compare it with the FE simulation results. Also, since in reality brain motion is somewhat constrained at its attachment to the spine and also by the trabeculae in the subarachnoid space, the experimental model needs to be modified to incorporate such constraints. This can be accomplished, for example, by fixing the gel to the cylinder in an off-center region. Such constraints can play a crucial role in increasing the local shear strains. The results of this study can be used to develop three-dimensional FE models of the brain, for which experimental validation data are difficult or costly to obtain. The main difference will be the geometry, which can be generated from CT-scan or MRI data. The material properties and other FE parameters will be the same.


A major limitation of this study is the use of a homogeneous and isotropic material for the brain. Various studies show that the brain is inhomogeneous and anisotropic [7, 8]. Whether the inhomogeneity of the brain (e.g., white matter and gray matter) can cause additional shear stress at their interface, or whether the highly oriented nerve fibers in the corpus callosum and the brain stem can lead to higher shear stresses, are important questions that will require more elaborate models to study.

REFERENCES

1. National Center for Injury Control and Prevention Website (accessed 2008) http://www.cdc.gov/ncipc/tbi/TBI.htm
2. Yang K.H., King A.I. (2003) "A limited review of finite element models developed for brain injury biomechanics research", Int. J. Vehicle Design, Vol. 32, Nos. 1/2, pp. 116-129
3. Laksari K., Darvish K. (2009) "Brain Deformations in Linear Head Impact", Proceedings of the ASME International Mechanical Engineering Congress and Exposition, Lake Buena Vista, FL
4. Shafieian M., Darvish K., Stone J.R. (2009) "Changes to the Viscoelastic Properties of Brain Tissue after Traumatic Axonal Injury", Journal of Biomechanics, Vol. 42, pp. 2136-2142
5. Mertz H.J., Prasad P. (1997) "Injury Risk Curves for Children and Adults in Front and Rear Collisions", Technical Report by H.J. Mertz, General Motors, P. Prasad, Ford Motor Co., and A.L. Irwin, General Motors, #973318
6. Bain A., Meaney D. (2000) "Tissue-Level Thresholds for Axonal Damage in an Experimental Model of Central Nervous System White Matter Injury", Journal of Biomechanical Engineering, Vol. 122, Issue 6, 615
7. Arbogast K.B., Margulies S.S. (1998) "Material Characterization of the Brainstem from Oscillatory Shear Tests", Journal of Biomechanics, Vol. 31, 801-807
8. Prange M., Margulies S. (2000) "Defining Brain Mechanical Properties: Effects of Region, Direction, and Species", Stapp Car Crash Journal, 44:205-213
9. Hardy W., Mason M., Foster C., Shah C., Kopacz J., Yang K., King A. (2008) "A Study of the Response of the Human Cadaver Head to Impact", Stapp Car Crash Journal
10. Nahum A., Smith R., Ward C. (1977) "Intracranial Pressure Dynamics during Head Impact", Stapp Car Crash Journal


Author: Kurosh Darvish, PhD
Institute: Temple University
Street: 1947 N. 12th Street
City: Philadelphia
Country: USA
Email: [emailprotected]

Mechanics of CSF Flow through Trabecular Architecture in the Brain

Parisa Saboori, Catherine Germanier, and Ali Sadegh

Dept of Mechanical Engineering, The City College of The City University of New York

Abstract— The importance of the subarachnoid space (SAS) and the meningeal region, which provides a restraining interface between the brain and the skull during coup/countercoup movement of the brain, has been addressed in the literature [9,10,11]. During a contact or non-contact (angular acceleration) head impact, caused by vehicular collisions, sporting injuries, and falls, the brain moves relative to the skull, thereby increasing the contact and shear stresses in the meningeal region and leading to traumatic brain injury (TBI). Previous studies have oversimplified this region by modeling it as a soft solid, which could lead to unreliable results. The biomechanics of the SAS has not been addressed in the literature. In this paper the mechanotransduction of the cerebrospinal fluid (CSF) through the SAS has been investigated. This is accomplished through a proposed analytical model and a finite element solution. The results indicate that Darcy's permeability is an appropriate model for the SAS and the proposed analytical model can be used to further investigate the transduction of mechanical and hydrodynamic forces through the SAS.

Keywords— Head impact, Subarachnoid Space, analytical modeling, CSF flow, Darcy permeability.

I. INTRODUCTION

In an accident, the human head, being a vulnerable body region, is most frequently involved in traumatic brain injuries (TBI) and life-threatening injuries. The anatomy of the head reveals that the brain is encased in the skull and is suspended and supported by a series of fibrous tissue layers (dura mater, arachnoid, trabeculae, and pia mater) known as the meninges. In addition, cerebrospinal fluid (CSF), located in the space between the arachnoid and pia mater known as the subarachnoid space (SAS), stabilizes the position of the brain during head movements. To explain the likely injury process of the brain and to quantify the response of the human head to blunt impacts, investigators have employed experimental, analytical, and numerical methods. Many researchers have used the finite element (FE) method to study head/brain injuries [1, 7, 13, 17, 18, 19]. The complicated geometry of the SAS and trabeculae makes it impossible to model all the details of the region. Thus, in these and other similar studies, the meningeal layers and the subarachnoid region have been simplified as a soft elastic material or, in some cases, as water (i.e., a soft solid having the bulk modulus of water and a very low shear modulus), e.g., [7, 17, 18].

That is, the hydraulic damping (i.e., the fluid-solid interaction) and the mechanical role of the fibrous trabeculae and the cerebrospinal fluid (CSF) in the subarachnoid space (SAS) were ignored. These simplifications are due to the complex geometry and random orientation of the trabeculae. In addition to the simplified models, the mechanical properties of the SAS are not well established in the literature. A few studies [6, 16, 17, 18] have reported elastic moduli of trabeculae spanning up to three orders of magnitude. As indicated, the SAS, which includes the CSF and the trabeculae, has a complicated geometry. This is due to the abundance of trabeculae in the form of rods (fibers) and thin transparent plates extending from the arachnoid (subdural) to the pia mater. The pia mater adheres to the surface of the brain and follows all its contours, including the folds of the cerebral and cerebellar cortices. This gives the subarachnoid space a highly irregular shape and makes the distribution of CSF around the brain non-uniform. The volume of CSF is highest within the cistern regions of the brain where, due to the shape of the brain surface, the subarachnoid space is large. Arachnoid trabeculae are more concentrated in the subarachnoid cisterns, sometimes even coalescing into membranes that partially occlude the subarachnoid space. This correlation between the CSF and the trabeculae suggests that their functions are not independent. These fluid and solid phases work together to mechanically support the brain. While the functionality of the SAS is understood, the histology and biomechanics of this important region have not been fully investigated. It is understood, however, that the arachnoid is a thin vascular layer composed of layers of fibroblast cells interspersed with bundles of collagen, and the trabecula is also a collagen-based structure. Only the histology of the trabeculae of the optic nerves has been studied [6]. In our previous in-vitro and in-vivo (animal) studies [9,10,11] the histology and the architecture of the trabeculae were investigated. Specifically, we employed a micro-CT scan and Mimics software and studied the 3D random structure of the trabeculae. In addition, solidified samples of the brain tissues were sliced using a vibratome and were dyed through standard procedures for fluorescent and confocal microscopy. Finally, an in-vivo experiment was performed using a Sprague-Dawley rat.


The rat was anesthetized with pentobarbital sodium given subcutaneously, and the right atrium was incised. With a sharp needle, a pre-fixative solution of PBS (phosphate-buffered saline) was injected into the left ventricle, followed by the fixative solution, and the subarachnoid space was solidified. Finally, after a few minutes, through the blood circulation, the blood vessels of the SAS were solidified. The animal was then sacrificed and a sample of the brain tissue was prepared for scanning electron microscopy; see [10,11]. While there have been many finite element studies of brain/head models, there are limited analytical models. The goal of the present paper is to mathematically model the subarachnoid space (SAS) and to investigate the biomechanics of CSF flow through the trabecular architecture in the subarachnoid space.

II. MECHANICS OF CSF FLOW THROUGH THE TRABECULAE

In this section, we propose a structural model for the analysis of mechanical transduction of the CSF's hydrodynamic forces through the SAS. Consider a transverse and/or lateral slice of the head where the brain is encased by the SAS. For simplicity of the analytical model we assume that the SAS is a uniform strip of a continuum around the brain. When the head is subjected to an impact, the deformation of the brain at the coup location causes the CSF to flow around the brain and stagnate at the countercoup location. Assume that at the CSF's stagnation point (countercoup) the SAS is cut; the strip band around the brain can then be straightened out as a long channel. Therefore, the mechanics of the CSF flow through the SAS can be simulated as the flow of CSF through a deformable channel (a long strip) whose top plane is subjected to an impact or deformation. Based on the anatomy of the head and brain, the dimensions of the deformable channel are 3 mm thick, 10 mm wide, and 600 mm long, as shown in Fig. 1.

Fig. 1 The channel (strip) model (0.6 m long, 0.01 m wide, 0.003 m thick)

It is assumed that the flow of the fluid through the deformable channel is governed by Darcy's permeability law. The reason for this choice is that in our previous studies [9] several material models of the SAS, including Darcy's permeability, viscous fluid, fluid-solid interaction, and poroelastic materials, were considered and analyzed. These material models were compared and validated with the experimental results of [2], who applied a mild angular acceleration to three human subjects and measured the strain in the brain. It was concluded that the results of Darcy's model were in good agreement with the experimental results, and thus it was selected. In addition, our in-vivo and in-vitro studies revealed that abundant trabeculae exist in the SAS region, which create a hydrodynamic resistance force against the CSF flow similar to Darcy's permeability model.

Structural Model: We propose a hexagonal structural (unit cell) model for the structural organization of the trabecular architecture, Fig. 2. While this is an ordered structure, it provides a base to formulate a mathematical model for analyzing the transduction of mechanical and hydrodynamic forces through the SAS. It is also assumed that each trabecula is a fiber (rod) with a circular cross-section of radius a connecting the arachnoid to the pia mater. This structural model provides a reasonable prediction of the SAS permeability. Using Darcy's law, the permeability k is estimated by

k = C ε³ / (μ S²)

[14] where C = 1/2 for a circular void, ε = void volume/total volume, μ = dynamic viscosity, and S = total reference area/total volume.

Fig. 2 Hexagonal structural model of trabeculae

Based on the experimental results, it is estimated that the radius of the fiber is approximately 5 microns and the fiber gap (spacing between two fibers) is ∆ = 40 microns. Using the hexagonal geometry, the void volume/total volume ε and the total reference area/total volume S can be written in terms of these dimensions. Therefore, using these equations, the SAS permeability is Kp = 3.125e-10, which is in the range of permeability for soft tissue in fluid media [15].

Mathematical formulation: Consider a long channel containing trabeculae and the CSF, shown in Fig. 1, subjected to a transverse load q(x,t) or a displacement on the top surface. The governing Darcy's law is
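The permeability estimate above is straightforward to evaluate once ε and S have been computed from the hexagonal geometry (those expressions are not reproduced here). A minimal Python sketch with placeholder inputs:

    def sas_permeability(eps, S, mu, C=0.5):
        # Darcy permeability estimate k = C * eps**3 / (mu * S**2) [14];
        # eps = void volume / total volume, S = total reference area / total
        # volume, mu = dynamic viscosity, C = 1/2 for circular voids.
        return C * eps ** 3 / (mu * S ** 2)

    # Hypothetical inputs only; the paper's hexagonal-geometry values for eps
    # and S (from fiber radius 5 microns, gap 40 microns) are not reproduced.
    print(sas_permeability(eps=0.95, S=1.2e4, mu=1.0e-3))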


u(x,t) = −(k/μ) ∂p(x,t)/∂x (1)

where u(x,t) is the velocity and p(x,t) is the pressure of the CSF. The continuity equation is

∂h(x,t)/∂t + ∂[h(x,t) u(x,t)]/∂x = 0 (2)

where h(x,t) is the height (thickness) of the channel. Finally, the balance of forces leads to a third equation, (3), coupling the pressure, the channel height, and the transverse load q(x,t). In this case we assumed q(x,t) = 0. Utilizing several mathematical manipulations and a change of variables, the solution to the three coupled differential equations was obtained. To solve for the constants C1, C2, and C3, boundary conditions on the channel ends (at x = 0 and x = 0.3) and on the channel height (0.003 m) were used, and the constants were determined accordingly (in particular, C2 = −0.3).

The input velocity, corresponding to a blunt impact, was used, and the results were compared to the FE solution.

Finite Element approach: The three-dimensional model of Fig. 1 was created, subjected to the same boundary conditions, and solved using ABAQUS. Due to space limitations, the FE model and results are not presented here. However, the results of the analytical and FE methods were compared and were in good agreement, as shown in Fig. 3.

Fig. 3 Comparison of the analytical and FE results

Trabecular buckling and recoil: Once the deformation of the proposed strip model of the SAS is known, the buckling and recoil of the unit cell fiber (trabecula) is investigated. A 2D unit cell FE model was created using MRI images, the velocity boundary condition was applied, ABAQUS software was employed, and the displacement history of the trabecula was determined. As shown in Fig. 4, the maximum displacement of the fiber was 2.6e-4 m.

Fig. 4 Trabecular buckling due to the velocity input boundary condition at different time steps

Analytical approach: The hexagonal unit cell model, represented by a single fiber, was employed and subjected to a non-uniform velocity profile, as shown in Fig. 5. Based on that profile, a drag force profile was applied on the trabecula. This profile is given by an expression involving the radius of the trabecula and the fiber volume fraction of the trabeculae in the periodic unit; these have the same values as in the previous section (the strip model).


Assuming the instantaneous local fiber velocity and the local deflection of the fiber are known, then, using the approach presented in [5,15] for small elastic deflections, the viscoelastic recoil of a trabecula is determined by a differential equation in the fiber deflection. To solve this equation, we first introduce dimensionless variables, normalizing the deflection by its maximum value at the beginning of the analysis and the transverse coordinate by the height of the SAS, with the coefficients of the velocity term absorbed into the normalization; the dimensionless equation is thereby reduced to a solvable form. For the boundary conditions we assume that there is no deflection at the top and bottom of the SAS, that the slope at the top of the SAS is zero, and that there is no shear force in the middle of the fiber. After some manipulation, the solution of the differential equation is expressed as a series whose unknown time-dependent coefficient functions fi are determined through the boundary conditions. The results are shown in Fig. 6, which indicates that it takes approximately 19-20 milliseconds for the fiber to return to its original shape in the SAS region.

Fig. 5 Non-uniform velocity profile applied to the unit cell fiber

Fig. 6 Recoil history of the trabecula

III. CONCLUSION

The aim of this study was to find a suitable model for the SAS region to be utilized in our global head model, in which we investigate the strain in the brain due to contact and/or non-contact head accelerations (impacts). Several models (soft solid, viscous fluid, Darcy's permeability model, and porous elastic models) were investigated. It was determined that Darcy's model is a realistic representation of the SAS region and can explain the hydrodynamic forces of CSF through the SAS region.


In this study we proposed a hexagonal structural (unit cell) model for the structural organization of the trabecular architecture. In addition, for the mathematical formulation we assumed that the SAS is a uniform strip of a continuum around the brain. The mathematical formulation of the strip model was based on Darcy's permeability, the continuity equation, and the balance-of-forces equation. The solution of the analytical model was in good agreement with the FE solution. Finally, the buckling and recoil of the unit cell fiber were formulated, solved analytically, and compared with the FE solution. The results of this study confirm the validity of the proposed structural unit cell model and the results of the analytical solution. In addition, the results indicate that Darcy's permeability is an appropriate model for the SAS. This study can be used as a basis to further investigate the transduction of mechanical and hydrodynamic forces through the SAS.

REFERENCES

1) Al-Bsharat AS, et al. (1999) Intracranial pressure in the human head due to frontal impact based on a finite element model. Proc. Bioengineering Conference, BED ASME, 42:113-114
2) Sabet A, et al. (2005) Deformation of the human brain induced by mild acceleration, J Neurotrauma, 22(8):845-856
3) Bayly P, et al. (2005) Deformation of the human brain induced by mild acceleration, J Biomechanics, 41:307-315
4) Drew et al. (2004) The contrecoup-coup phenomenon. Neurocritical Care 3:385-390
5) Guo et al. (2000) A hydrodynamic mechanosensory hypothesis for brush border microvilli. Am J Physiol Renal Physiol 279(4):F698-712
6) Killer HE, et al. (2003) Architecture of arachnoid trabeculae, pillars, and septa in the subarachnoid space of the human optic nerve, Br J Ophthalmol, 87:777-781
7) Kleiven S, Hardy WN (2002) Correlation of an FE model of the human head with local brain motion: consequences for injury prediction. Stapp Car Crash Conference, 46:2002-22-0007
8) Killer HE, et al. (2006) The optic nerve: a new window into cerebrospinal fluid composition
9) Saboori P, Sadegh A (2009) Effect of mechanical properties of SAS trabeculae in transferring loads to the brain (IMECE2009)
10) Saboori P, Sadegh A (2010) Histology and trabecular architecture of the brain for modeling of the subarachnoid space. ASME/SBC
11) Saboori P, Sadegh A (2010) Modeling of subarachnoid space and trabeculae architecture as it relates to TBI. 16th US Nat. Cong. Theor. & App. Mech. #887
12) Rao V, Lyketsos C (2000) Neuropsychiatric sequelae of traumatic brain injury. Psychosomatics 41(2):95-103
13) Ruan JS, Khalil TB, King AI (1993) Finite element modeling of direct head impact. Stapp Car Crash Conference, 37:933114
14) Truskey G, et al. (2004) Transport Phenomena in Biological Systems
15) Weinbaum et al. (2003) Mechanotransduction and flow across the endothelial glycocalyx. PNAS 100(13):7988-7995
16) Jin X, et al. (2008) Biomechanical response of the bovine pia-arachnoid complex to tensile loading at varying strain rates. Stapp Car Crash Journal 50:637-649
17) Zhang L, Yang KH, King AI (2001) Biomechanics of neurotrauma. J Neurotrauma 23:144-156
18) Zhang L, et al. (2001) Comparison of brain responses between frontal and lateral impacts by finite element modeling. J Neurotrauma, 18:21-30
19) Zhang L, et al. (2002) Recent advances in brain injury research. Stapp Car Crash Conference, 45:2001


Impact of Mechanical Loading to Normal and Aneurysmal Cerebral Arteries

M. Zoghi-Moghadam, P. Saboori, and A. Sadegh

Department of Mechanical Engineering, City College of New York, New York, NY

Abstract— Aneurysms of the cerebral arteries carry a risk of rupture, which leads to intracranial hemorrhage that could be fatal. Nearly 1 to 5 percent of the population has a cerebral aneurysm; however, 50 to 80 percent of these cases never experience rupture of the cerebral arteries. In the case of rupture, the mortality rate is a high 45 percent. Many people with cerebral aneurysms are unaware of them. It is plausible to assume that arteries with aneurysms are more susceptible to injury under the high stress and strain created by contact or non-contact head acceleration (blunt impacts). The purpose of this study is to investigate the response of cerebral arteries with and without aneurysm under external mechanical loadings. A two-dimensional (2D) finite element model of the head in the sagittal plane has been created. The most common site of cerebral aneurysm is the circle of Willis, more specifically the anterior communicating artery (ACoA). A portion of the ACoA, with and without aneurysm, has been created in the 2D model. The model was analyzed under dynamic loading, and the stress and strain fields were investigated. It was concluded that the strain field is higher in the aneurysm case compared to the normal case. It was also observed that the presence of the aneurysm influences the surrounding brain media.

Keywords— Aneurysm, Finite Element Analysis, Head Injury, Modeling.

I. INTRODUCTION

An aneurysm is a focal dilatation of the vessel wall. Most are spherical in shape (saccular aneurysms), but they can also be fusiform. Aneurysms of the cerebral arteries can have severe consequences. Autopsy studies have estimated that the prevalence of cerebral aneurysms in the adult population is between 1 and 5 percent. These aneurysms become clinically relevant when they rupture and cause severe intracranial hemorrhage, which can be fatal. Patients with ruptured aneurysms present with a sudden onset of severe headache, stiff neck, and meningeal irritation due to subarachnoid hemorrhage. Studies have shown that 50 to 80 percent of all aneurysms do not rupture, so these patients do not know that they are at risk [1]. However, if rupture does occur, its mortality rate is 45 percent within the first month post-rupture [2,3]. Currently, there is no sensitive, cost-effective method for detecting cerebral aneurysms in the early stages. Risk factors include hypertension, smoking, heavy alcohol consumption, family history, and head injury.

Understanding the difference in response to mechanical loading between a normal artery and an artery with an aneurysm can be helpful for preventing this fatal consequence in people at high risk. Traumatic brain injury (TBI), which is highly prevalent in the adult population, particularly needs to be prevented in these high-risk individuals. Nearly 1.5 million people in the US suffer from TBI annually [4]. The major causes of TBI are crashes involving motor vehicles, bicycles, and pedestrians, firearm use, contact sports, and falls. Researchers have studied cerebral aneurysms from different viewpoints. Li and Robertson (2009) developed a structural multi-mechanism damage model. They modeled cerebral arteries as an incompressible fiber-reinforced composite, with the reinforcement provided by a helical network of collagen fibers. Their model was validated against analytical data [5]. Eriksson et al. (2009) developed a growth model of saccular cerebral aneurysms using a two-layer cylinder (media and adventitia). They generated a damage region in the media and studied the stress distribution according to fiber angle [2]. Watton et al. (2009) studied the effect of a change in hemodynamic environment on the formation of cerebral aneurysms. They showed that the initiation of an aneurysm is related to high wall shear stress (WSS) and wall shear stress gradient (WSSG) [3]. This has been confirmed by Li and Robertson (2009), Marcelo et al. (2006), Meng et al. (2006, 2007), and Metaxa et al. (2009) [5-9]. Many people with cerebral aneurysms are unaware of them because they have never experienced a rupture, and thus there are no clinical signs or symptoms. The significance of this is currently unknown. It is plausible to assume that arteries with aneurysms are weaker and thus more susceptible to high stress and strain concentrations, i.e., during contact and non-contact acceleration of the head (blunt impacts). If this is the case, then there may be a reason to screen people with a high relative risk based on known risk factors. This may allow these people to be monitored more carefully in order to prevent a catastrophic event. Furthermore, understanding how the presence of an aneurysm affects stress and strain concentrations could lead to a better assessment of a patient's risk for rupture. This may allow for preventative interventions for people with a significant risk of rupture.


The purpose of this study is to investigate the response of cerebral arteries with and without aneurysm under external dynamic mechanical loadings. The most common sites for cerebral aneurysms are the arteries that make up the circle of Willis (about 75%). More specifically, the statistical data show the following occurrence rates: anterior communicating artery, 30%; posterior communicating artery, 25%; and middle cerebral artery, 20% [1]. A simplified two-dimensional finite element model of the circle of Willis within the brain media was created. The model is subjected to external loading in the form of a dynamic displacement boundary condition. The focus of the study is on the anterior communicating artery (ACoA). Its strain and stress fields due to external loading are determined for two different cases, from normal arteries to severely aneurysmal arteries.

II. METHODS

A. Modeling

The model is a two-dimensional (2D) sagittal plane model of the head. It consists of the scalp, skull, dura mater, arachnoid layer, pia mater, and brain. The geometry of the model is taken from human magnetic resonance imaging (MRI). A detailed description of the model, including the geometry information, mesh generation, and material properties, can be found in Saboori and Sadegh (2009) [10]. To model the aneurysm, a portion of the ACoA, which is a component of the circle of Willis, was created in the 2D model. Figure 1 shows the 2D model including all its components. The ACoA has been modeled as a single-layered cylinder comprising the adventitia and media layers. The material properties are taken from Watton et al. (2009) [3]. To account for the presence of blood within the artery, a solid material with a low shear modulus and high Poisson's ratio was placed inside the arterial walls. There are two models of the ACoA, one representing a normal healthy artery and the other an ACoA with a saccular aneurysm. The geometry of the aneurysm is taken from Brisman et al. (2006). Figure 2 shows a magnified view of the ACoA with aneurysm. The wall thickness of the blown-up region (0.25 mm) is less than that of the healthy portions of the ACoA (0.375 mm), reflecting the fact that the arterial wall has degenerated from the inside as the aneurysm formed. ABAQUS/CAE 6.9-2 [11] was used for pre-processing, analysis, and post-processing. The model was subjected to a dynamic blunt impact in the form of a velocity boundary condition; it took 90 minutes to run a 1-sec analysis in 40 intervals on a 3.4 GHz processor.

Fig. 1 2D model of the head in the sagittal plane. The meningeal layers are present in the model. In the middle there is a portion of the ACoA in the sagittal plane. The normal ACoA is shown here; there is another model with the same geometry having an aneurysmal ACoA

Fig. 2 Aneurysmal artery; the radius of the aneurysm region is 1 mm. The material properties of the blood media are those of a solid material with a low shear modulus and a very high Poisson's ratio

B. Results

The nodal solution for stress and strain was investigated. Figure 3 shows the strain contour at the last time step for the normal artery. As shown in this figure, the strain is larger in the vicinity of the ACoA. This could be due to the different material properties of the brain media and the ACoA.


Fig. 3 Strain contour at t = 0.5 sec

Fig. 4 Stress contour (a) in the normal case and (b) in the aneurysmal case. The region of high stress is larger in (b)

Figures 4a and 4b show the stress field of the brain region for the normal case and the aneurysm case, respectively. As seen in Figure 4a, the stress distribution is fairly smooth. The maximum stress occurs around the tips of the ACoA, which could be due to a stress concentration effect. However, the stress field for the aneurysm case, shown in Figure 4b, has a relatively large region of high stress in the middle of the brain compared to the normal case. This shows the impact of the aneurysm on the stress field of its neighboring components, such as the brain. Figures 5a and 5b show the stress and strain distributions in the aneurysmal arterial wall. As shown in Figure 5a, the maximum stress takes place in the lower left portion of the artery as well as in the aneurysm portion. Figure 5b follows almost the same pattern for strain, which is expected. Figures 5a and 5b suggest that an injury is likely to happen in the aneurysm portion.

Fig. 5 (a) Stress distribution and (b) strain distribution in the aneurysmal ACoA

III. DISCUSSION AND CONCLUSIONS


The objective of this study is to compare the response of healthy normal cerebral arteries versus aneurysmal arteries to external mechanical loadings. These loading conditions may simulate the mechanical stresses endured during a low-impact injury. Nearly 30% of cerebral aneurysms occur in the ACoA, thus this artery was chosen for modeling in the current study.


The ACoA is part of the circle of Willis, which is a dual blood supply that contains all the major vessels feeding the cerebrum. The circle of Willis receives all the blood that is pumped up the two internal carotid arteries, which ascend the front of the neck, and the blood pumped from the basilar artery, formed by the union of the two vertebral arteries that ascend the back of the neck. All the principal arteries that supply the cerebral hemispheres branch off from the circle of Willis. Autopsies have shown that 1 to 5 percent of the population possesses cerebral aneurysms; however, these do not rupture in about 50 to 80 percent of cases. This means that many people with aneurysms are unaware of their risk for rupture and possibly a fatal subarachnoid hemorrhage. Studying the response to mechanical loading of healthy versus aneurysmal arteries can help us understand why some people are more susceptible to traumatic head injury. It may also help doctors better assess the risk of rupture in people who have cerebral aneurysms. This could lead to preventative interventions. Two cases have been considered in this study: one a normal healthy ACoA and the other a model of an aneurysmal ACoA. The two models were analyzed under the same dynamic loading conditions. A summary of the results has been outlined in the previous section. In order to focus on the effect of the aneurysm, the strain fields of the arterial walls of the two models were compared. Figures 6 and 7 represent the average stress and the total strain with respect to the normalized length of the artery for the two cases, respectively.


Fig. 6 Comparison of the stress field of normal arteries and arteries with aneurysm

As shown in Figure 6, the maximum stress within the aneurysm is nearly 100% greater than in the normal healthy artery. Similarly, Figure 7 shows that the maximum strain in the region of the aneurysm is nearly 50% greater than in the normal artery. These results support the conclusion that low-impact external loading is more likely to damage a cerebral artery with an aneurysm than a normal healthy artery. This supports the hypothesis that people with cerebral aneurysms are more likely to experience head injury for any given initial load.

Fig. 7 Comparison of the strain field of normal arteries and arteries with aneurysm

The results for average stress in the brain media show areas of large stress in the middle of the cerebrum for the aneurysm case. This is interesting because the aneurysmal artery has influenced not only the arterial walls but also the surrounding media. In other words, an aneurysm could lead to injuries in the brain tissue itself, such as diffuse axonal injury (DAI).

Further studies are needed to support or refute our conclusions. Multiple parameters are likely to play significant roles in governing the severity of head injury. Detailed studies of these parameters could advance the state of the art in understanding the mechanism of injury and thereby minimize the catastrophic impact of injuries.

REFERENCES

1. Brisman J, Song J, Newell D (2006) Cerebral aneurysms. N Engl J Med 355:928-939
2. Eriksson T, Kroon M, Holzapfel G (2009) Influence of medial collagen organization and axial in situ stretch on saccular cerebral aneurysm growth. J Biomech Eng 131:101010-1 - 101010-7
3. Watton P, et al. (2009) Coupling the hemodynamic environment to the evolution of cerebral aneurysms: computational framework and numerical examples. J Biomech Eng 131:101003-1 - 14
4. www.cdc.gov
5. Li D, Robertson A (2009) A structural multi-mechanism damage model for cerebral arterial tissue. J Biomech Eng 131:101013-1 - 8
6. Marcelo C, et al. (2006) Patient-specific computational modeling of cerebral aneurysms. Acad Radiol 13:811-821
7. Meng H, Swartz D, Wang Z, et al. (2006) A model system for mapping vascular responses to complex hemodynamics at arterial bifurcations in vivo. Neurosurgery 59(5):1094-1101
8. Meng H, Wang Z, Hoi Y, et al. (2007) Complex hemodynamics at the apex of an arterial bifurcation induces vascular remodeling resembling cerebral aneurysm initiation. Stroke 38:1924-1931
9. Metaxa E, Tremmel M, Xiang J, et al. (2009) High wall shear stress and positive wall shear stress gradient trigger the initiation of intracranial aneurysms. SBC 2009, Lake Tahoe, CA
10. Saboori P, Sadegh A (2009) Effect of mechanical properties of SAS trabeculae in transferring loads to the brain. IMECE 2009
11. ABAQUS, Inc., Pawtucket, RI


Identification of Material Properties of Human Brain under Large Shear Deformation: Analytical versus Finite Element Approach

C.D. Untaroiu1, Q. Zhang1, A.M. Damon1, J.R. Crandall1, K. Darvish2, G. Paskoff3, and B.S. Shender3

1 Center for Applied Biomechanics/Department of Mechanical & Aerospace Engineering, University of Virginia, Charlottesville, VA, USA
2 Department of Mechanical Engineering, Temple University, Philadelphia, PA, USA
3 Naval Air Warfare Center Aircraft Division, Patuxent River, MD, USA

Abstract— Brain injuries have severe consequences and can be life-threatening. Computational models of the brain with accurate geometries and material properties may help in the development of injury countermeasures. The mechanical properties of brain under various loadings have been reported in many studies in the literature over the past 60 years. Step-and-hold tests under simple loading conditions have often been used to characterize the viscoelastic and nonlinear behavior of brain under high-rate deformation; however, the stress relaxation curves used for material identification of brain are typically obtained by neglecting the initial strain ramp and by assuming a uniform strain distribution in the sample. Using finite element simulations of human brain shear tests, this study shows that these simplifications may have a significant effect on the measured material properties. Models optimized using only the stress relaxation curve predict much lower stress during the strain ramp due to an inaccurate elastic function. In addition, material models optimized using analytical models, which assume a uniform strain distribution, under-predict peak forces in finite element simulations. Models optimized using finite element simulations show similar relaxation behavior to the optimized analytical model, but predict a stiffer elastic response (about 46% stiffer). Identification of brain material properties using finite element optimization techniques is recommended in future studies.

Keywords— Human Brain, Material properties, Linear Viscoelastic, Quasi-Linear Viscoelastic, Finite Element Method.

I. INTRODUCTION

Brain injuries have severe consequences and can be life-threatening. The continuous improvement of computational models of the brain and of optimization techniques may help in the development of injury countermeasures; however, valid numerical brain models require accurate material models under a variety of loading conditions. The mechanical properties of brain under various loadings have been reported in many studies in the literature over the past 60 years. Step-and-hold tests under simple loading are often used to characterize the viscoelastic and nonlinear behavior of brain under high-rate deformation [1]. However, the stress relaxation curves used for material identification of brain are typically obtained under two assumptions. First, by neglecting the initial strain ramp, a perfect step loading is assumed. Second, a uniform strain distribution is assumed, and the parameters of a one-dimensional (1D) analytical material model are identified using optimization techniques. In an effort to better understand the mechanical response of human brain, the effects of the aforementioned two assumptions on the material properties are quantitatively investigated in this study using a finite element (FE) approach, which considers the three-dimensional (3D) deformation of the sample.

II. MATERIALS AND METHODS

A. Testing

Test data were taken from simple shear tests of seven cylindrical human samples collected by Takhounts [1]. The material properties were analyzed and the results are reported in this study. Fresh human brain samples (approximately 12 mm height and 18 mm diameter) of primarily white matter (more than 85% according to histological analysis of the samples) were obtained within 24 hours of death and were kept moist and refrigerated during the next 24 hours. The samples were attached to the plates of a custom-made testing device using methyl-2-cyanoacrylate adhesive (Super Glue Corporation, Rancho Cucamonga, CA) [1]. A programmable linear actuator attached to the lower plate was used to apply a linear displacement to the brain sample corresponding to 50% engineering shear strain in about 0.1 sec. Two force transducers (Sensotec Inc., Columbus, Ohio, Model 31/1435-03-04 and Model 31/1434-01-01) attached to the upper plate were used to record the shear and tensile forces during testing.

B. Material Identification – Analytical Approach

It is well known that brain tissue exhibits time-dependent stress-strain behavior [1-2], so it is a viscoelastic material.


An isotropic linear viscoelastic (LV) material is the simplest constitutive formulation used to model the brain. According to this formulation, if the material is subjected to a perfect step loading

ε(t) = ε_0 H(t − τ) (1)

where H is the Heaviside step function, then the stress is given by

σ(t) = ε_0 G(t) (2)

where G(t) is the stress relaxation function, which is often approximated as a sum of exponentials:

G(t) = G_∞ + Σ_{i=1}^{3} G_i e^{−β_i t} (3)

Applying the Boltzmann superposition principle, the stress time history has the following integral representation:

σ(t) = ∫_0^t G(t − τ) (∂σ(ε)/∂τ) dτ (4)

Since the ramp time was about 0.1 sec and the total duration of the ramp-and-hold test was about 4.5 sec, three decay rates with different orders of magnitude were chosen:

β_i = 10^i [sec⁻¹], i = 0, …, 2 (5)

With the relaxation function (3), the stress at time t + Δt can be written as: t + Δt ∂σ (ε ) dτ = σ (t + Δt ) = ∫ G (t + Δt − τ ) ∂τ (6) 0 t

∫ G (t + Δt − τ ) 0

∂σ (ε ) dτ + ∂τ

t + Δt

∫ G (t + Δt − τ ) t

∂σ (ε ) dτ ∂τ

After calculation of both terms of Eq. 6, the formula of stress at time t + Δt can be written as: t ∂σ (ε ) dτ + σ (t + Δt ) = G∞ ε (t + Δt ) + ∑ e − β Δt ∫ Gi e − β (t −τ ) ∂τ (7) i =1 0 i

ε (t + Δt ) − ε (t ) Δt

∑ (1 − e i =1

− β i Δt

i

Gi i

So, if the coefficients of the relaxation function ( Gi , β i ) are known, the stress at time t + Δt can be calculated based on the stress at time t and the strain at both time steps t and t + Δt . The sum of squared errors (SSE) between the numerically calculated stress and the stress calculated from shear tests at about 200 equidistant points on the logarithmic time scale were considered as the objective function. While the values of β i were assumed (see Eq. 5), the shear coefficients

was also used to model the brain. This 1D mathematical model assumes that the relaxation function is split into two components: one a function of time and the other a function of strain. These components are multiplied [3] as: (8) G (t ) = Gr (t )σ e (t )

where σ e (ε ) is the instantaneous elastic response, which was assumed to be an odd polynomial function independent of the loading direction. 2

σ e (ε ) = ∑ C2i +1ε 2i +1 ;

(9)

i =0

A discrete spectrum was assumed for the normalized relaxation function Gr (t ) : 2

2

i =0

i =0

Gr (t ) = Gr ,∞ + ∑ Gr , 2i +1e −βit ; Gr ,∞ + ∑ Gr , 2i +1 = 1 ;

(10)

Then, according to the Boltzmann hereditary integral formulation, the stress σ (t ) is described as: t

σ (t ) = ∫ Gr (t − τ ) 0

∂σ e (ε ) ∂ε dτ ∂ε ∂τ

(11)

As in case of the linear viscoelastic model, the decay rates of relaxations β i were assumed (Eq. 5) and the values of the reduced shear coefficients response

Gr , 2i +1 and the instantaneous

C 2i +1 (7 optimization variables) were obtained by

minimizing the sum of squared errors (SSE) between the model and experimental stress. The parameters of both the LV and QLV models were identified under two conditions: one which neglects the loading curve assuming a perfect step shear loading [1], and another which considers the whole loading curve. C. Material Identification – Finite Element Approach One of the seven human brain tests (test 17) analyzed in this study was simulated numerically using LS-DYNA nonlinear FE software (LSTC, Livermore, CA). A parametric mesh of the cylindrical brain sample was developed in TrueGrid (XYZ Scientific Applications, Inc, Livermore, CA). a)

Brain sample

b)

Rigid plate

Gi (4 optimization variables) were determined by minimizing the SSE using a quasi-Newton algorithm implemented in the Excel Solver package (Microsoft, Redmond, WA). A quasi-linear viscoelastic (QLV) formulation, frequently applied to characterize soft tissues under large deformation,

449

Fig. 1 The FE model of brain samples a) undeformed state b) deformed state (50% eng. Shear strain)

IFMBE Proceedings Vol. 32

450

C.D. Untaroiu et al.

The linear viscoelastic material model optimized using the analytical model (1D) was assigned to the brain model. The bulk modulus of brain was considered to be similar to that of water with a value of 2.1 GPa [4]. While the model nodes of the downward face were fully constrained to the rigid plate, the model nodes of the upward face were constrained only in the z and y directions. Displacement in the x direction was prescribed based on the displacement time history measured during testing. The time histories of shear force were calculated as the sum of nodal forces of the downward face along the x-direction. The shear force time history predicted by the FE model using the analytical material model showed similar relaxation behavior but lower peak stresses during the loading ramp. Therefore, the parameters of the FE material model were optimized by multiplying the shear coefficients by a constant and minimizing the SSE between the shear force recorded in testing and the corresponding force calculated in FE simulations.

III. RESULTS AND DISCUSSION

a)

b)

Fig. 2

8

Stress(kPa)

6

QLV LV

4

2

0 0%

10%

20%

30%

40%

50%

60%

ShearEngineeringStrain(%)

1.2

ReducedRelaxationFunction

The time histories of the shear stress (unfiltered) showed little noise during loading and relaxation, but inherent oscillatory mechanical noise occurred at the beginning and end of the ramp phase (Fig. 2). These oscillations, caused by inertial effects, were eliminated in the analytical models by minimizing the SSE of the model fit. Both the LV and QLV constitutive models that were optimized considering the loading ramp fit the data well (Fig. 2); however, the QLV model showed a better fit than the LV model given the higher number of coefficients used in the optimization (7 parameters in QLV compared to 4 parameters in LV). As expected, the models optimized by neglecting the loading ramp were able to characterize the relaxation response but under-predicted the peak stresses during the loading phase (Fig. 2). The average shear instantaneous response of the LV model was stiffer than the QLV model at low strain levels, and the opposite behavior was true at higher strain levels (Fig. 3a). In addition, the QLV model showed a faster relaxation response than the LV model (Fig. 3b). While the differences between the relaxation curves are mainly caused by the different assumed shapes of the instantaneous elastic response (linear vs. 5th degree polynomials), collecting additional test data from human tissue samples would improve the robustness of this methodology. For example, it would be beneficial to include the time histories of tensile force and record longer hold periods to better approximate long time relaxation behavior.

Fig. 2 Comparison of test data (Test 17) with the analytical models optimized using the whole loading curve versus only the relaxation curve: a) linear viscoelastic (LV) model, b) quasi-linear viscoelastic (QLV) model (shear stress, kPa, versus engineering shear strain, 0–60%)


Fig. 3 Comparison of the LV and QLV models: a) instantaneous elastic response, b) reduced relaxation function (average QLV and LV curves and the Test 17 fits, plotted against time on a logarithmic scale from 0.0001 to 10 s)

The FE model of the brain sample (Test 17) with the shear material properties from the analytical model predicts


much lower forces than the experimental data (Fig. 4). FE optimization of the linear viscoelastic model yielded a very good fit to the experimental data; however, the elastic coefficient obtained from the FE optimization is around 46% stiffer than that of the analytical model.


The elements with strain close to the assumed analytical value (between 45% and 55% strain) are located in the center of the specimen and represent only 41% of the total number of elements. Approximately 2% of the elements recorded shear strains above 55%. These results suggest that the shape of the specimen may have a significant influence on the shear coefficient values obtained using an analytical approach. Therefore, additional numerical studies are recommended in order to determine the dimensional characteristics of samples (or correction factors) that would reasonably satisfy the assumption of a uniform strain distribution. Another alternative, especially for analyzing tests already performed, would be to identify the model parameters using FE simulations [5].

Fig. 4 Comparison between the shear force predictions of the linear viscoelastic models: the optimized analytical model (1D), the FE model with the material parameters of the analytical model, and the optimized FE model

The inconsistencies between the results of the 1D and 3D models can be explained by the assumption of uniformly distributed shear strain within the sample used by the 1D model. This assumption is contradicted by the results obtained in the FE simulation, as observed in Fig. 5. At the maximum displacement of the sample, the engineering shear strain shows a non-uniform distribution ranging from almost 0% to 75% within the sample. The lower force predicted by the analytical model can be explained by the high percentage of elements with shear strains at lower levels than assumed by the analytical approach. For example, at the maximum shear displacement (50% strain), 57% (876 elements) of the total number of elements (1420 elements) show shear strains under 45% (the analytical approach assumed 50% for all elements).

Fig. 5 The distribution of shear strain in the sample at maximum displacement (color scale 0–75% strain): a) whole model (1420 elements), b) elements with shear strain less than 45% (876 elements), c) elements with shear strain between 45% and 55% (630 elements)

IV. CONCLUSIONS

The material properties of the human brain under large shear deformation were investigated in this study. The models optimized using only the relaxation curve predict much lower stresses during loading due to an inaccurate elastic function. In addition, the material models optimized using an analytical model that assumes a uniform strain distribution predict lower forces in finite element simulations. Finite element optimization appears to be a promising tool for the identification of brain material properties, since it considers the entire loading time history and the non-uniform strain distribution within the sample.

ACKNOWLEDGMENT

This research was funded under Naval Air Warfare Center Aircraft Division contract N00421-06-C-0048.

REFERENCES

[1] Takhounts E (1998) Experimental determination of constitutive equations for human and bovine brain tissue. Ph.D. thesis, University of Virginia
[2] Darvish K et al. (2001) Nonlinear Viscoelastic Effects in Oscillatory Shear Deformation of Brain Tissue. Med Eng Phys 23:633-645
[3] Fung YC (1993) Biomechanics: Mechanical Properties of Living Tissues. Springer, New York
[4] Zhang L et al. (2001) Comparison of Brain Responses Between Frontal and Lateral Impacts by FEM. Journal of Neurotrauma 18(1):21-30
[5] Untaroiu C et al. (2007) A Design Optimization Approach of Vehicle Hood for Pedestrian Protection. Int J Crash 12(6):581-589

The address of the corresponding author:


Costin D. Untaroiu Center for Applied Biomechanics 1011 Linden Ave. Charlottesville, VA 22902, USA [emailprotected]


Mechanisms of Traumatic Rupture of the Aorta: Recent Multi-scale Investigations

N.A. White1, C.S. Shah2, and W.N. Hardy1

1 Virginia Tech – Wake Forest University, Center for Injury Biomechanics, Blacksburg, USA
2 First Technology Safety Systems, Inc., CAE, Plymouth, USA

Abstract— Traumatic rupture of the aorta (TRA) occurs most often in high-speed automobile collisions. Although infrequent overall, TRA accounts for a disproportionately high percentage of crash-related fatalities. The etiology of TRA is reviewed along with novel experimental techniques for reproducing clinically relevant TRA in unembalmed cadaver tissue and whole cadavers. A multi-scale testing approach is used to study TRA, including biaxial tissue testing of aorta samples, longitudinal stretch testing of intact aorta specimens, in-situ testing of the aorta, and whole-body cadaver impact studies using a high-speed biplane x-ray system. It is shown that anterior, superior, or lateral distraction of the arch can generate a tear in the peri-isthmic region of the aorta. Axial elongation (longitudinal stretch) is fundamental to the initiation of TRA, with complete failure of the aorta in the peri-isthmic region beginning near strains on the order of 0.22. Additionally, deformation of the thorax is essential for TRA to occur. On the other hand, whole-body acceleration and intraluminal pressure are not required to produce TRA. Pulmonary artery injury need not accompany TRA. While the ligamentum arteriosum may contribute to TRA, it is not required to produce injury. Atherosclerotic plaque is shown to increase the risk for TRA. Testing of perfused cadavers is used to elucidate potential mechanisms of TRA induced by automobile crashes. Three-dimensional motion of the aorta within the mediastinum and longitudinal strain in the peri-isthmic region are measured during frontal and lateral impacts using high-speed x-ray. Dorsocranial and left-side lateromedial deformation of the thorax can generate TRA in the cadaver. However, further investigation is needed to better understand these mechanisms. The use of finite element simulations has become a viable way of investigating the underlying mechanisms of TRA using real-world scenarios and has the potential to aid the design of future cadaver studies involving TRA. Once better understood, these injuries can be mitigated through advances in automotive safety systems. Keywords— TRA, aorta, mechanism, kinematics, cadaver.

I. INTRODUCTION

Over 20,000 cases of TRA, with an 88.6% fatality rate, were reported for motor vehicle crashes (MVCs) between 1995 and 2000, associated primarily with frontal and near-side impacts (McGwin et al., 2003). The rate of TRA in near-side motor vehicle crashes is double the rate seen in frontal crashes (Steps 2003). In 2008, Bertrand et al. found that 21.4% of all MVC fatalities were attributed to TRA.

Clinical TRA almost always occurs in the transverse direction, with tears occurring mainly in the intima and media of the aorta (Zehnder 1960, Strassmann 1947). In 94% of all TRA cases, these tears are confined to the peri-isthmic region of the aorta (Katyal 1997). The ascending aorta extends superiorly and posteriorly from the left ventricle, then forms the aortic arch and continues inferiorly along the left side of the vertebral column as the descending aorta. The left subclavian artery, left common carotid artery, and brachiocephalic trunk branch from the arch of the aorta, and the intercostal arteries branch from the descending thoracic aorta. The region of the aortic arch neighboring the ligamentum arteriosum is referred to as the isthmus and is of particular importance in TRA. The peri-isthmic region is bounded by the insertion of the left subclavian artery cranially and the junction of the arch and descending aorta caudally. The three layers that compose the aortic wall are the intima, media, and adventitia. The innermost layer, the intima, is a layer of endothelial cells. Thickening of this layer with age has been shown to affect the mechanical properties of the aorta (Clark & Glagov, 1979). The middle layer, referred to as the media, is composed of smooth muscle cells, elastic fibers, and collagen. The outer layer, composed mainly of collagen fibers and ground substance, is the adventitia.

II. EXPERIMENTAL STUDIES

While TRA has been studied for more than a century, little was known about its mechanism of injury. Several theories to explain TRA have been proposed, including downward traction (Letterer 1924), intravascular pressure (Klotz and Simpson 1932), deceleration (Zehnder 1960), the “water hammer” (Lundevall 1964), and Voigt’s “shoveling” (Voigt and Wilfert 1969). Recently, a multi-scale experimental approach was implemented to examine the mechanisms of TRA, including biaxial tissue testing of aorta samples, longitudinal stretch testing of intact aorta specimens, in-situ testing of the aorta, and whole-body cadaver tests.

A. Tissue-Level Testing (Biaxial)

High-speed biaxial tissue properties of the aorta were first examined by Shah et al. (2005, 2006, 2007) using a


custom-designed biaxial testing device (Mason 2005). Through the use of a carriage system riding on linear shafts and miniature clamps, cruciate-shaped tissue samples were subjected to simultaneous equal stretch in four directions at roughly 1 and 5 m/s. A template was used to stamp the cruciate samples oriented 22.5 degrees from the longitudinal axis of the aorta in Shah et al. (2005) and collinearly with the longitudinal axis in Shah et al. (2006, 2007). The former method was used to compare results with those of Lanir et al. (1996). These results were later transformed to the longitudinal and circumferential directions for comparison with Shah et al. (2006, 2007). Shah et al. (2006, 2007) used aortas from twelve unembalmed, frozen and then thawed cadavers. The tissue samples were harvested from the ascending, peri-isthmic, and descending aorta of three female and nine male cadavers, with an average age, stature, and mass of 68 years, 172 cm, and 82 kg, respectively. Ink was used to mark each specimen with an array of dots in a regular pattern. High-speed video was used to capture the displacement of the dots during each test, allowing strain time histories to be calculated. Two lasers were used to measure the change in thickness of each specimen during testing, with results confirming the incompressible nature of the tissue. This tissue incompressibility assumption was exercised for several tests where the laser data were unusable. Miniature load cells attached to each tissue clamp, along with accelerometers, were used to calculate inertia-compensated load time histories. The overall average maximum principal strain rate, longitudinal Lagrangian failure stress, and failure strain were 84.97±48.07 s-1, 1.96±0.97 MPa, and 0.244±0.100, respectively. Specimens failed in the transverse (circumferential) direction in each test (Figure 1a), with failure occurring first in the intima, and exhibited nonlinear, anisotropic mechanical properties with no apparent rate effect.

B. Component-Level Testing (Longitudinal Stretch)

To investigate the structural properties of the aorta, a series of component-level longitudinal stretch tests were conducted using seven intact specimens (Shah 2006, 2007). The unembalmed, intact aortas were excised from the root to the celiac trunk from five female and three male cadavers with an average age, stature, and mass of 77 years, 164 cm, and 69 kg, respectively. Each specimen was lined with dots on the external surface using ink and then clamped in a high-rate hydraulic loading device, constraining the aortic arch and descending aorta. The aortic arch clamp contained the attachment of the left subclavian artery, but not the ligamentum arteriosum. The non-perfused aorta was placed into tension until failure at a rate of 1 m/s. High-speed video was used to capture the displacement of the dots during each test. Longitudinal Lagrangian strain, maximum principal strain, and strain rate were calculated in the area of tear initiation. Failure load and engineering stress at failure were also reported, as were the locations of the tears with respect to the ligamentum arteriosum and the left subclavian artery. All tears occurred in the transverse direction within the peri-isthmic region (Figure 1b). On average, complete transection of the aorta occurred at 92±7 N axial load, 0.221±0.069 axial strain, 0.75±0.14 MPa engineering stress, and 11.8±4.6 s-1 maximum principal strain rate.
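Because the biaxial and stretch results above mix engineering and Lagrangian measures, the standard definitions are worth restating (a reference note, not text from the study). With current length l, reference length l0, failure load F, and undeformed cross-sectional area A0:

\[
\lambda = \frac{l}{l_{0}}, \qquad
e = \lambda - 1, \qquad
E = \tfrac{1}{2}\left(\lambda^{2} - 1\right), \qquad
\sigma_{\mathrm{eng}} = \frac{F}{A_{0}},
\]

so that, if the reported failure strain of 0.221 is read as a Green-Lagrange value, the corresponding stretch is \(\lambda = \sqrt{1 + 2E} \approx 1.20\), i.e. roughly 20% elongation.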


Fig. 1 (a) Aorta sample during transverse tear initiation. An array of 11 dots was tracked to calculate strain. (b) Complete transection of the aorta in the transverse direction within the peri-isthmic region

C. In-Situ Component-Level Testing

Hardy et al. (2006) performed a series of in-situ experiments on unembalmed cadavers, involving quasi-static controlled distortion of the heart and aortic arch until complete transection. Four cadavers, two female and two male, with an average age, stature, and mass of 59 years, 169 cm, and 84 kg, were used in four in-situ tests. The specimens were subjected to open-chest, quasi-static tests. The chest wall was removed and the heart and aorta carefully exposed. Nylon webbing was passed around the spine just dorsal to the aortic arch and through the back of the cadaver, where it was secured to the test fixture. A grid pattern was applied to the peri-isthmic region and part of the descending aorta to facilitate peak stretch estimation during the tests. The aorta of the first specimen was distracted manually, without perfusion pressure, in the anterior direction. The next three aortas were pressurized and pulled in tension via a ratcheting system fitted with a load cell. Nylon webbing was wrapped around the ascending aorta and ligamentum arteriosum of the second specimen and distracted anteriorly (Figure 2a). The third specimen was distracted laterally to the right with webbing wrapped around the ascending aorta, but not the ligamentum arteriosum. The fourth specimen was distracted superiorly with the webbing wrapped around the aortic arch. All four specimens failed transversely in the peri-isthmic region, close to the ligamentum arteriosum and the pleural attachment between the spine and aorta (Figure 2b). Minor


lacerations in the transverse direction were noted along the intima in the vicinity of the primary tear. The presence of atherosclerotic plaque increased both the number of these lacerations and their distance from the primary tear. On average, the distances from the primary tear to the ligamentum arteriosum and the left subclavian artery were 15 and 29 mm, respectively. The peak webbing load and percent stretch averaged 148 N and 30%, respectively, illustrating that TRA can result from nominal levels of tension.


Fig. 2 (a) Manual anterior distraction of the ascending aorta. (b) Partial tear resulting from manual anterior distraction of the ascending aorta

D. Whole-Body Testing

In addition to the four quasi-static open-chest tests, Hardy et al. (2008) performed whole-body dynamic impact tests using high-speed x-ray. The peri-isthmic region and descending aorta were accessed through an axillary thoracotomy on the left side, between ribs 3 and 4. A series of 2-mm diameter lead spheres were fixed to the adventitia of the freed section of aorta at regular intervals using black cyanoacrylate gel (Figure 3).

Fig. 3 (a) Marker placement along the aorta of a whole-body specimen. (b) Single frame from the high-speed x-ray system displaying marker placement on the aorta

Webbing was wrapped around the spine at two levels to constrain the body, the arms were allowed to dangle, and the lower extremities were amputated at the hip. To position the mediastinal contents in a more anatomical position, the cadaver was inverted at an angle. The aorta was pressurized to approximate normal physiological conditions. Eight whole-body cadavers, four male and four female, with an average age, stature, and mass of 70 years, 175 cm, and 65 kg, were subjected to shoveling, side impact, submarining, or combined impact conditions (Figure 4). All aortic tears, except one minor tear, occurred within the peri-isthmic region (Figure 5a). These tears occurred primarily in the circumferential direction and in the vicinity of the lesser curvature of the aortic arch. It was common to see tears in areas of increased atherosclerotic plaque. Multiple bilateral rib fractures occurred in every test, in addition to sternum fractures in the shoveling test and visceral damage to the abdominal organs in the submarining test. The average peak impact load, impact speed, and intraluminal aortic pressure were 4.5 kN, 8.0 m/s, and 67.5 kPa, respectively. The mediastinal motion of the aorta was determined from high-speed x-ray motion tracking of the aorta targets (Figure 5b). The shoveling test produced dorsocranial mediastinal motion, moving the aorta posteriorly, superiorly, and slightly to the left. The side impact tests produced anteromedial motion, moving the aorta anteriorly, laterally (right), and slightly superiorly (arm engaged) or slightly inferiorly (ribs engaged). The submarining test produced dorsocranial motion, moving the aorta superiorly, somewhat posteriorly, and slightly laterally. The combined tests produced dorsocranial and medial motion, moving the aorta superiorly, posteriorly, and laterally (left). Average longitudinal tensile strain time histories were calculated from marker displacements in terms of triads (triangular marker combinations) using LS-DYNA (Livermore Software Technology Corporation, CA). Tension was the primary mode of loading for the longitudinal response of all tests.
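The longitudinal strain computation reduces to a simple per-marker calculation. The paper derived strain from marker triads in LS-DYNA; the pairwise Python sketch below, with hypothetical coordinates, conveys the idea.

import numpy as np

def green_lagrange(p_a, p_b):
    # p_a, p_b: (n_frames, 3) marker trajectories in mm; strain is
    # referenced to the inter-marker distance in the first frame.
    dist = np.linalg.norm(p_b - p_a, axis=1)
    stretch = dist / dist[0]
    return 0.5 * (stretch**2 - 1.0)

# Hypothetical trajectories: marker B moves 1.5 mm away from marker A.
n = 5
p_a = np.zeros((n, 3))
p_b = np.column_stack([np.linspace(10.0, 11.5, n), np.zeros(n), np.zeros(n)])
print(green_lagrange(p_a, p_b))  # 0.000 ... ~0.161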

III. CONCLUSIONS

A multi-scale experimental approach has been implemented to study the mechanisms of TRA. Failure of the aorta will always occur in the transverse direction, with the intima failing before the media or adventitia. As a complete structure, the aorta fails within the peri-isthmic region at roughly 30-percent stretch. While the material properties of the aorta are characterized by a nonlinear stress-strain response, more research is needed to determine rate effects. Simple tension of the aorta in situ can generate clinically relevant TRA. Straightening of the inferior arch of the aorta through anterior, superior, or right lateral distraction may initiate a tear. While intraluminal pressure and whole-body acceleration are not required to produce TRA, thoracic deformation must occur. TRA can occur without injury to the pulmonary artery and without loading via the ligamentum arteriosum. However, an important aspect of TRA is the tethering of the descending thoracic aorta by the parietal pleura. When atherosclerosis is present, TRA tends to occur within regions of plaque, at longitudinal tensile strains below established failure thresholds for the aorta. While a better understanding of TRA mechanisms has been acquired through this multi-scale experimental approach, further testing is required to fully understand this deadly phenomenon.


Fig. 4 Experimental set-up for the (a) shoveling, (b) arm side impact, and (c) submarining high-speed x-ray tests



Fig. 5 (a) Aortic tear from a frontal shoveling impact. (b) Motion of the aortic markers during a frontal shoveling impact, sagittal view with superior and anterior axes (scale in mm)

ACKNOWLEDGMENT

This work was conducted under the auspices of Wayne State University. The authors wish to thank the Bone and Joint Specialty Center of the Henry Ford Health System. The funding for this research has been provided [in part] by private parties, who have selected Dr. Kennerly Digges [and the FHWA/NHTSA National Crash Analysis Center at The George Washington University] to be an independent solicitor of and funder for research in motor vehicle safety, and to be one of the peer reviewers for the research projects and reports. This research was supported in part by NHTSA through the Southern Consortium for Impact Biomechanics.

REFERENCES

1. Bertrand S, Cuny S, Petit et al. (2008) Traumatic rupture of the thoracic aorta in real-world motor vehicle crashes. Traffic Injury Prevention, 9:153-161
2. Clark J, Glagov S (1979) Structural integration of the arterial wall. I. Relationships and attachments of medial smooth muscle cells in normally distended and hyperdistended aortas. Lab Invest 40, 587-602
3. Hardy W, Mason M, Foster C et al. (2007) A study of the response of the human cadaver head to impact. Stapp Car Crash Journal, 51:17-80

4. Hardy W, Schneider L, Rouhana S (2001) Abdominal impact response to rigid-bar, seatbelt, and airbag loading. Stapp Car Crash Journal, 45:1-31
5. Hardy W, Shah C, Kopacz J et al. (2006) Study of potential mechanisms of traumatic rupture of the aorta using in situ experiments. Stapp Car Crash Journal, 50:247-265
6. Katyal D, Mclellan B, Brenneman F et al. (1997) Lateral impact motor vehicle collisions: Significant cause of blunt traumatic rupture of the thoracic aorta. Journal of Trauma, 42(5), 769-772
7. Klotz O, Simpson W (1932) Spontaneous rupture of the aorta. American Journal of Medical Science, 184, 455-473
8. Letterer E (1924) Beitrage zur entstehung der aortenruptur an typischer stele. Virch. Arch. Path. Anat, 253, 534-544
9. Lundevall J (1964) The mechanism of traumatic rupture of the aorta. Acta Path. Microbiol. Scand, 62, 34-46
10. Mason M, Shah C, Maddali M et al. (2005) A new device for high-speed biaxial tissue testing: Application to traumatic rupture of the aorta. Transactions of the Society of Automotive Engineers, Paper No. 2005-01-0741
11. McGwin G, Metzger J, Moran S et al. (2003) Occupant- and collision-related risk factors for blunt thoracic aorta injury. J. Trauma, 54, 655-662
12. Shah C (2007) Investigation of traumatic rupture of the aorta (TRA) by obtaining aorta material and failure properties and simulating real-world aortic injury crashes using the whole-body finite element (FE) human model. PhD Dissertation, Mechanical Engineering, Wayne State University, Detroit, Michigan
13. Shah C, Hardy W, Mason M et al. (2006) Dynamic biaxial tissue properties of the human cadaver aorta. Stapp Car Crash Journal, 50:217-245
14. Shah C, Mason M, Yang K et al. (2005) High-speed biaxial tissue properties of the human cadaver aorta. Proceedings of IMECE05, 2005 ASME International Mechanical Engineering Congress, IMECE2005-82085
15. Steps J (2003) Crash characteristics indicative of aortic injury in near side vehicle-to-vehicle crashes. Ph.D. Dissertation, The George Washington University
16. Strassmann G (1947) Traumatic rupture of the aorta. American Heart Journal, 33, 508-515
17. Voigt G, Wilfert K (1969) Mechanisms of injuries to unrestrained drivers in head-on collisions. Proc. 13th Stapp Car Crash Conference, pp. 295-313
18. Zehnder M (1960) Accident mechanism and accident mechanics of the aortic rupture in the closed thorax trauma. Thoraxchirurgie und Vasculaere Chirurgie, 8, 47-65

Author: Warren N. Hardy
Institute: Virginia Tech
Street: 443 ICTAS Bldg, Stanger St., MC 0194
City: Blacksburg
Country: USA
Email: [emailprotected]

Head Impact Response: Pressure Analysis Simulation

R.T. Cotton1, P.G. Young2, C.W. Pearce2, L. Beldie3, and B. Walker3

1 Technical Services, Simpleware, Exeter, UK
2 School of Engineering, Mathematics and Physical Sciences, University of Exeter, Exeter, UK
3 Vehicle Design, ARUP, Solihull, UK

Abstract— A new approach to generating physical and numerical models of the human head is presented. In this study, analytical, numerical and experimental models were used in parallel to explore the pressure response of the human head as a result of low velocity impact. The aim of the study is to investigate whether it is possible to predict the response of the head for a particular impact scenario using image-based modeling techniques. A number of finite element models were generated based on MRI scan data. The models were generated using a technique adapted from the marching cubes approach, which automates the generation of meshes based on 3D scan data and allows a number of different structures (e.g. skull, scalp, brain) to be meshed simultaneously. The resultant mesh was used to explore the intra-cranial response to impact. Previously developed approximate analytical expressions were also used to provide additional comparison results. Good agreement was observed between these modeling techniques, and large transient pressure amplification at the site of impact was observed for impacts of short duration. The presented research demonstrates the potential of the approach for the generation of head impact models based on in vivo clinical scans. Beyond its significance in the area of head impact biomechanics, the study has demonstrated that numerical models generated from 3D medical data can be used effectively to simulate physical processes. This is particularly useful when considering the risks, difficulties and ethical issues involved when using cadavers. Keywords— image-based meshing, patient-specific modeling, finite element, pressure response, head impact.

I. INTRODUCTION

Although a wide range of mesh generation techniques is currently available, these have, on the whole, not been developed with meshing from segmented 3D imaging data in mind. Meshing from 3D imaging data presents a number of challenges but also unique opportunities for providing a more realistic and accurate geometrical description of the computational domain. The majority of approaches adopted

have involved generating a surface model (either in a discretized or continuous format) from the scan data, which is then exported to a commercial mesher – a process that is time consuming, not very robust, and virtually intractable for the complex topologies typical of image data. A more ‘direct’ approach, presented in this paper, is to combine the geometric detection and mesh creation stages in one process, which offers a more robust and accurate result than meshing from surface data.

II. MESH GENERATION FROM BIOMEDICAL IMAGING DATA: CAD VERSUS IMAGE-BASED MESHING

Meshing from image data presents a number of challenges but also unique opportunities, so that a conceptually different approach can provide, in many instances, better results than traditional approaches. Image-based mesh generation raises a number of issues that differ from CAD-based model generation. CAD-based approaches use the scan data to define the surface of the domain and then create elements within this defined boundary [1]. Although reasonably robust algorithms are now available [2], these techniques do not easily allow for more than one domain to be meshed, as multiple surfaces are often non-conforming, with gaps or overlaps at interfaces where one or more structures meet. A more direct approach developed by the authors combines the geometric detection and mesh creation stages in one process. The technique generates 3D hexahedral or tetrahedral elements throughout the volume of the domain [3], thus creating a robust and accurate mesh directly, with conforming multi-part surfaces. This technique has been implemented as a set of computer codes (ScanIP, +ScanFE and +ScanCAD).
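The core of this direct approach can be illustrated with a deliberately simplified sketch in Python, assuming a segmented label volume as input: every labeled voxel becomes one hexahedral element, and elements sharing voxel faces share nodes, so interfaces between labels conform by construction. Production tools such as +ScanFE add surface smoothing and element-quality control that this toy version omits.

import numpy as np

def voxels_to_hex_mesh(labels):
    # labels: 3D integer array; returns (nodes, elems, elem_labels).
    nz = np.argwhere(labels > 0)                 # foreground voxel indices (i, j, k)
    corner_offsets = np.array([[0,0,0],[1,0,0],[1,1,0],[0,1,0],
                               [0,0,1],[1,0,1],[1,1,1],[0,1,1]])
    corners = nz[:, None, :] + corner_offsets[None, :, :]   # (n, 8, 3)
    flat = corners.reshape(-1, 3)
    # Shared corners collapse to shared nodes, so adjacent elements
    # (even of different labels) conform automatically.
    nodes, inverse = np.unique(flat, axis=0, return_inverse=True)
    elems = inverse.reshape(-1, 8)               # hex connectivity
    return nodes.astype(float), elems, labels[tuple(nz.T)]

# Two-label toy volume (e.g. "skull" = 1, "brain" = 2).
vol = np.zeros((3, 3, 3), dtype=int)
vol[0, :, :] = 1
vol[1:, :, :] = 2
nodes, elems, mats = voxels_to_hex_mesh(vol)
print(nodes.shape, elems.shape, mats)            # (64, 3) (27, 8) labels per element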

III. PRESSURE RESPONSE ANALYSIS IN HEAD INJURY

In this study, analytical, numerical and experimental models were used in parallel to explore the pressure response of the human head as a result of low velocity impact. The aim of the study is to investigate whether it is possible


to predict the response of the head for a particular impact scenario using these image-based modeling techniques.

A. Methods and Model Generation

High resolution T1-weighted whole-head MRI scans of a young male were obtained in vivo. Using ScanIP software (Simpleware Ltd.), 15 different structures were segmented, i.e. brain (gray matter, white matter, brain stem, cerebellum), CSF, skull, mandible, cervical vertebrae, intervertebral discs, eye (eye-ball, optic nerve, fatty tissue), nasal passage, and skin (cf. Fig. 1).

Fig. 1 Segmented head model in a) ScanIP (Simpleware) and b) LS-DYNA® (LSTC)

A number of finite element models were generated in +ScanFE (Simpleware Ltd.) based on the segmented image data. The resultant mesh was exported to LS-DYNA® (LSTC – Livermore Software Technology Corp.). The various components are connected by coincident nodes and elements, and the exterior surface of the skin was used to define a contact surface. An impactor was introduced in LS-DYNA® with a velocity of 7 m/s, a mass of 6.8 kg, and an event duration of 15 ms. The brain region was modeled as a viscoelastic material, the CSF as an elastic fluid, and everything else as an elastic material.

B. Results and Discussion

The resulting models are geometrically very accurate and were used to explore the intra-cranial response to impact. Previously developed approximate analytical expressions were also used to provide additional comparison results [4]. The finite element models generated were solved using LS-DYNA®. At early stages after contact, a high pressure transient is observed under the site of impact, followed by a negative pressure transient and then a high positive pressure transient, as shown in Fig. 2.

Fig. 2 Von Mises stress at the brain in LS-DYNA® (LSTC)

Good agreement was observed between these modeling techniques, and large transient pressure amplification at the site of impact was observed for impacts of short duration. Von Mises stresses in the intervertebral discs and cervical vertebrae were also investigated (cf. Fig. 3).


Fig. 3 Von Mises stress a) in the discs and b) in the vertebrae in LS-DYNA® (LSTC)

In addition, a model of the head wearing a helmet was generated (cf. Fig. 4). For the head and helmet model, +ScanCAD (Simpleware Ltd.) was used to import STEP data of the helmet and interactively position it on the head. The resultant mesh was again exported to LS-DYNA®, including a contact surface on the outside of the helmet. The resulting simulations show the influence of the presence of a helmet in reducing the pressure transient.


Fig. 4 Head and helmet model in +ScanCAD (Simpleware)

IV. CONCLUSIONS

The ability to automatically convert any 3D image dataset into high quality meshes is becoming the new modus operandi for anatomical analysis. Techniques have been developed for the automatic generation of volumetric meshes from 3D image data, including image datasets of complex structures composed of two or more distinct domains and including complex interfacial mechanics. The techniques guarantee the generation of robust, low-distortion meshes from 3D data sets for use in finite element analysis (FEA), computer aided design (CAD), and rapid prototyping (RP). Additional tools enable the incorporation of CAD models interactively within the image. The presented research demonstrates the potential of the approach for the generation of head impact models based on in vivo clinical scans. Beyond its significance in the area of head impact biomechanics, the study has demonstrated that numerical models generated from 3D medical data can be used effectively to simulate physical processes. This is particularly useful when considering the risks, difficulties and ethical issues involved when using cadavers. It has been shown how integrating CAD data into the image data can be used to investigate different helmet designs with a realistic head model.

REFERENCES

[1] Cebral J, Loehner R (2001) From medical images to anatomically accurate finite element grids. Int J Num Methods Eng 51:985-1008
[2] Antiga L, Ene-Iordache B, et al. (2002) Geometric reconstruction for computational mesh generation of arterial bifurcations from CT angiography. Computerized Medical Imaging and Graphics 26:227-235
[3] Young P, Beresford-West T, et al. (2008) An efficient approach to converting 3D image data into highly accurate computational models. Philosophical Transactions of the Royal Society A 366:3155-3173
[4] Johnson E.A.C., Young P.G. (2005) On the use of a patient-specific rapid-prototyped model to simulate the response of the human head to impact and comparison with analytical and finite element models. J Biomech 38:39-45

Author: Ross Cotton
Institute: Simpleware Ltd.
Street: Rennes Drive
City: Exeter
Country: United Kingdom
Email: [emailprotected]

An Introduction to the Next Generation of Radiology in the Web 2.0 World

A. Moein1, M. Malekmohammadi2, and K. Youssefi3

1 Department of Biomedical Engineering, Azad University, Science and Research Branch, Tehran, Iran
2 Department of Electrical Engineering, Sharif University of Technology, Tehran, Iran
3 Mississippi State University, MS, USA

Abstract— "Web 2.0" refers to a second generation of web development and design, that facilitates communication, secure information sharing, interoperability, and collaboration on the World Wide Web. The truth is that Web 2.0 is a difficult term to define, even for web experts. Usually phrases like “the web as platform” and “architecture of participation” are used to describe this term. Examples of Web 2.0 include webbased communities, hosted services, web applications, socialnetworking sites, video-sharing sites, wikis, blogs, mashups and folksonomies. The Internet is changing medicine and Web 2.0 is the current buzz word in the World Wide Web dictionary. Radiology in the Web 2.0 probably refers to things like globalization, clinical decision supporting softwares, social networking sites dedicated to radiology and also radiology centric blogs and wikis etc. Also concepts like PACS, DICOM, RIS, Teleradiology, Web-based PACS, HL7, IHE, HIPAA, etc again somewhere refer to the impact of Web 2.0 on radiology and often known as Radiology 2.0. In this paper we are going to have an overview on recent development of radiology in the Web 2.0 world and also demonstrate our point of view about the radiology electronic learning in the future. Keywords— Web 2.0, Radiology e-Learning, Picture Archiving and Communication System (PACS), Digital Imaging and Communication in Medicine (DICOM), Teleradiology, Web-based PACS, Medical Imaging Informatics.

I. INTRODUCTION

Web 2.0 generally refers to a set of social, architectural, and design patterns resulting in the mass migration of business to the Internet as a platform. These patterns focus on the interaction models between communities, people, computers, and software. Human interactions are an important aspect of software architecture and, even more specifically, of the set of websites and web-based applications built around a core set of design patterns that blend the human experience with technology [1]. Web 2.0 is about the spirit of sharing, in contrast to the traditional concept of “knowledge is power”. Knowledge in the world of Web 2.0 is about sharing and is nobody's property. The term I personally use for Web 2.0 and medicine is the democratization of knowledge. As of today, Web 2.0 is an important repository of medical knowledge

editable in real time by physicians. This is in contrast to the static delivery of content over the traditional Internet, hence the term Web 2.0 [2].

II. WEB 2.0 CONCEPTS

A. Characteristics

Web 2.0 websites allow users to do more than just retrieve information. They build on the interactive facilities of "Web 1.0" to provide "network as platform" computing, allowing users to run software applications entirely through a browser [3]. Users can own the data on a Web 2.0 site and exercise control over those data [3,4]. The characteristics of Web 2.0 are: rich user experience, user participation, dynamic content, metadata, web standards, and scalability. Further characteristics, such as openness, freedom [5], and collective intelligence [3] by way of user participation, can also be viewed as essential attributes of Web 2.0.

B. Technology Overview

Web 2.0 draws together the capabilities of client-side and server-side software, content syndication, and the use of network protocols. Standards-oriented web browsers may use plug-ins and software extensions to handle the content and the user interactions. Web 2.0 sites provide users with information storage, creation, and dissemination capabilities that were not possible in the environment now known as "Web 1.0" [6].

C. Usage

The popularity of the term Web 2.0, along with the increasing use of blogs, wikis, and social networking technologies, has led many in academia and business to coin a flurry of 2.0s [7], including Library 2.0 [8], Enterprise 2.0, e-Learning 2.0, Publishing 2.0, Medicine 2.0, Travel 2.0, and even Government 2.0 [9]. Many of these 2.0s refer to Web 2.0 technologies as the source of the new version in their respective disciplines and areas [6].


III. WEB 2.0 AND E-LEARNING

The shift to Web 2.0 has its counterparts in both e-learning technology and methodology. e-Learning 2.0 focuses strongly on the collaborative nature of learning, marking a transition from the traditional view of e-learning as a technologically driven way to transfer pre-existing knowledge to recipients. One of the core methodologies behind e-Learning 2.0 is connectivism, concentrating on making connections (i.e. links) among learning resources and people. e-Learning 2.0 also brings a strong focus on content syndication and its reuse/re-purposing, adaptation, and personalization [10]. The term "Web 2.0" is used to describe applications that distinguish themselves from previous generations of software by a number of principles. Previous studies showed that Web 2.0 applications can be successfully exploited for technology-enhanced learning. However, in-depth analyses of the relationship between Web 2.0 technologies on the one hand and teaching and learning on the other are still rare. Web 2.0 is not only well suited for learning but also for research on learning [11].

IV. WEB 2.0 AND MEDICINE While it may be too early to come up with an absolute definition of Medicine 2.0 or Health 2.0, Figure 1 shows a suggested framework, created in the context of a call for papers for the purpose of scoping the Medicine 2.0 congress and this theme issue [13]. The program of the first Medicine 2.0 conference [14] also gives a good idea of what academics feel is relevant to the field [12].

Fig. 1 Medicine 2.0 Map (with some current exemplary applications and services)

According to the model depicted in Figure 1, five major aspects (ideas, themes) emerge from Web 2.0 in health, health care, medicine, and science, which will outlive the specific tools and services offered. These emerging and recurring themes are (as displayed in the center of Figure 1):

• Social Networking
• Participation
• Apomediation
• Collaboration
• Openness

While “Web 2.0”, “Medicine 2.0”, and “Health 2.0” are terms that should probably be avoided in academic discourse, any discussion and evaluation concerning the impact and effectiveness of Web 2.0 technologies should be framed around these themes [12]. Figure 1 also depicts the three main user groups of current Medicine 2.0 applications as a triangle: consumers/patients, health professionals, and biomedical researchers. While each of these user groups has received a different level of “formal” training, even end users (consumers, patients) can be seen as experts and—according to the Web 2.0 philosophy—their collective wisdom can and should be harnessed: “the health professional is an expert in identifying disease, while the patient is an expert in experiencing it” [15]. Current Medicine 2.0 applications can be situated somewhere in this triangle space, usually at one of the corners, depending on which user group they primarily target. However, the ideal Medicine 2.0 application would actually try to connect different user groups and foster collaboration between them (for example, engaging the public in the biomedical research process), and thus move more towards the center of the triangle. Putting it all together, the original definition of Medicine 2.0—as originally proposed in the context of soliciting submissions for the theme issue and the conference—was as follows [13]: Medicine 2.0 applications, services, and tools are web-based services for health care consumers, caregivers, patients, health professionals, and biomedical researchers that use Web 2.0 technologies and/or semantic web and virtual-reality tools to enable and facilitate specifically social networking, participation, apomediation, collaboration, and openness within and between these user groups [12]. There is, however, also a broader idea behind Medicine 2.0 or “second generation medicine”: the notion that healthcare systems need to move away from hospital-based medicine, focus on promoting health, provide healthcare in people's own homes, and empower consumers to take responsibility for their own health—much in line with what


others and I have previously written about the field of consumer health informatics [16] (of which many Medicine 2.0 applications are prime examples). Thus, in this broader sense, Medicine 2.0 also stands for a new, better health system, which emphasizes collaboration, participation, apomediation, and openness, as opposed to the traditional, hierarchical, closed structures within health care and medicine [12].

V. WEB 2.0 AND RADIOLOGY

One of the more profound changes to radiology and healthcare will come from online collaboration, or Web 2.0, tools. The new logic of peer-to-peer sharing takes radiology consulting to another level of peer contact and collaboration. What this means for radiology is the possibility of a convergence of teleradiology and online collaboration, which will further leverage expert relationships across medical science. Collective knowledge on this scale could help to develop greatly enhanced peer review and quality standards; provide a platform for checking real-time drug interactions, monitoring patient reactions, and supporting new interventional techniques and services; and provide a knowledge base of support between residents and attending physicians [17]. Specific to radiology and Web 2.0, there are many social networking and community sites, wikis, and search engines, such as:

• Webicina: Practicing Medicine in the Web 2.0 Era [18]
• MyPACS: A web-based teaching file authoring tool for Radiologists and related professionals that allows easy uploading of images and descriptive information from any computer with web access [19]
• AuntMinnie: Provides the first comprehensive community Internet site for Radiologists and related professionals in the Medical Imaging industry [20]
• DiagnosticImaging: Daily news, announcements and conference reports [21]
• radRounds: Connecting Radiology, enhancing collaboration, education and networking [22]
• Radiopaedia: The online collaborative Radiology resource and encyclopedia [23]
• Radiolopolis: The international Radiology network and professional Radiology community [24]
• RadsWiki: More than 3000 articles focusing on numerous sub-fields of Radiology [25]
• Yottalook: Radiology references, teaching files and peer-reviewed images [26]
• RadiologySearch: A special search engine dedicated to finding Radiological content [27]


The growth of online medical information, online physician-to-physician collaboration, and social networking will put pressure on the future shape of radiology services. The developing picture to keep in mind is: DICOM and high-speed networks continue to improve the speed and delivery of complex radiology studies; teleradiology is evolving into globalized full-service radiology; and online physician and patient social networking is broadening clinical collaboration for doctors and access to medical knowledge for consumers [17]. One of the common Web 2.0 architecture patterns is “structured information”: the advent of XML and the ability to apply customized tagging to specific elements have led to the rise of syntaxes commonly referred to as microformats. These are small formats with highly specialized abilities to mark up precise information within documents. The use of such formats, in conjunction with the rise of XHTML, lets Internet users address content at a much more granular level than ordinary HTML. The XHTML Friends Network (XFN) format is a good example of this pattern [1]. The term “structured reporting” in radiology means different things to different people, and the DICOM Structured Report (SR) is widely used as a standard mechanism for capturing, presenting, transmitting, and exchanging information in diagnostic medical imaging. Known methods and systems for presenting a DICOM Structured Report include, for example, using a DICOM SR viewer, available on the Advantage Windows (AW) review workstation, which enables exporting the report to formats such as HTML, XML, plain text, and PDF to facilitate generating a hard copy of the reports being presented [28,30].
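As a concrete illustration of the export idea, the content tree of a DICOM SR can be walked and flattened with the open-source pydicom library. This is a hedged sketch, not the AW workstation's actual export logic; the file name is hypothetical and only TEXT values are printed.

import pydicom

def walk_sr(item, depth=0):
    # Each SR content item has a value type, an optional concept name,
    # and (for containers) a nested ContentSequence of child items.
    name = ""
    if "ConceptNameCodeSequence" in item:
        name = item.ConceptNameCodeSequence[0].CodeMeaning
    vtype = item.get("ValueType", "")
    text = item.get("TextValue", "")
    print("  " * depth + " ".join(s for s in (vtype, name, text) if s))
    for child in item.get("ContentSequence", []):
        walk_sr(child, depth + 1)

ds = pydicom.dcmread("report_sr.dcm")  # hypothetical SR file path
walk_sr(ds)  # the SR root dataset acts as the top-level CONTAINER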

VI. RADIOLOGY AND E-LEARNING 2.0

The appeal of online education and distance learning as an educational alternative is ever increasing. To support and accommodate the over-specialized knowledge available from different experts, information technology can be employed to develop virtual distributed pools of autonomous specialized educational modules and provide the mechanisms for retrieving and sharing them [29]. We present and evaluate a new learning environment model based on Web 2.0 applications. We assume that the technological change introduced by Web 2.0 tools has also caused a cultural change in terms of dealing with types of communication, knowledge, and learning. The goal is the design and development of a web-based e-Learning 2.0 application to assist medical imaging informatics and other healthcare professionals, based on DICOM Working Group 18 (WG-18), which extends the DICOM Standard with respect to clinical trials information and the storage of


images for educational and research purposes, and to identify attributes necessary for use in clinical trials (e.g., client, clinical protocol, site number) and technique-related attributes [31]. This overview introduces the concepts of e-Learning 2.0 and Personal Learning Environments, along with their main aspects of autonomy, creativity, and networking, and relates them to the didactics of constructivism and connectivism. From these, the requirements and basic functional components for the development of our particular Web 2.0 learning environment are derived. As a result, we have an advanced PACS-based imaging informatics e-Learning 2.0 module that assists users in improving their skills by working with this system. The key point of this research, design, and development is the implementation of a system with all applicable features of the Web 2.0 world.

VII. CONCLUSIONS

There is no doubt that modern computer technology and the Internet create an incredible ability to find the proverbial needle in a haystack, in medicine as in other endeavors. The broader possibilities represented by this capability are encapsulated by the concept of Web 2.0, a vast, distributed network of individuals who openly share information and technology. Whereas the initial phase of the Internet included many static pages created by individuals or private interests, Web 2.0 represents an interactive, collaborative, constantly evolving network of information reflecting communication among many different people. The benefits derive from open access and the sharing of information. An ecological and Web 2.0 perspective of e-learning provides new ways of thinking about how people learn with technology and also about the new learning opportunities offered by new technology. These perspectives highlight the importance of developing connections between a wide variety of learning resources, containing both codified and tacit knowledge. New adaptive technology has the potential to create personalized, yet collective, learning. The future implications for e-learning in medical education are considered.

REFERENCES

1. Governor J, Hinchcliffe D, Nickull D (2009) Web 2.0 Architectures, O’Reilly books
2. Sethi S K (2008) Web 2.0 and Radiology, The Internet Journal of Radiology, Vol 8 Num 2
3. O'Reilly T (2005) What Is Web 2.0: Design Patterns and Business Models for the Next Generation of Software
4. Hinchcliffe D (2006) The State of Web 2.0
5. Greenemeier L, Gaudin S (2007) Amid The Rush To Web 2.0, Some Words Of Warning, InformationWeek

6. Web 2.0 at Wikipedia, the Free Encyclopedia, Available at: http://en.wikipedia.org/wiki/Web_2.0
7. Schick S (2005) I Second that Emotion, IT Business, Canada
8. Miller P (2008) Library 2.0: The Challenge of Disruptive Innovation
9. Eggers W D (2005) Government 2.0: Using Technology to Improve Education, Cut Red Tape, Reduce Gridlock, and Enhance Democracy, Rowman & Littlefield Publishers
10. Drasil P, Pitner T, e-Learning 2.0: Methodology, Technology and Solutions
11. Ullrich C, Borau K, Luo H, Tan X, Shen L, Shen R (2008) Why Web 2.0 is Good for Learning and for Research: Principles and Prototypes, Proceedings of the 17th International Conference on World Wide Web, pages 705-714, ISBN:978-1-60558-085-2
12. Eysenbach G (2008) Medicine 2.0: Social Networking, Collaboration, Participation, Apomediation, and Openness, J Med Internet Res;10(3):e22 DOI:10.2196/jmir.1030, Available at: http://www.jmir.org/2008/3/e22/
13. Eysenbach G (2008) Medicine 2.0 Congress Website Launched (and: Definition of Medicine 2.0 / Health 2.0), Available at: http://gunthereysenbach.blogspot.com/2008/03/medicine-20-congress-website-launched.html
14. Eysenbach G (2008) Medicine 2.0 Final Program, Medicine 2.0 Congress, Available at: http://www.medicine20congress.com/ocs/schedule.php
15. Davison K P, Pennebaker J W (1997) Virtual narratives: illness representations in online support groups, in Petrie K J, Weinman J A (Eds.), Perceptions of health and illness: Current research and applications, Harwood Academic Publishers:463-486
16. Eysenbach G (2000) Consumer Health Informatics, BMJ;320(7251):1713-1716
17. Sappington R W (2008) Leading Radiology Services in the Age of Teleradiology, Wikinomics, and Online Medical Information, Radiology Management J:34-42
18. Webicina at http://www.webicina.com
19. MyPACS at http://www.mypacs.net
20. AuntMinnie at http://www.auntminnie.com
21. DiagnosticImaging at http://www.diagnosticimaging.com
22. radRounds at http://www.radrounds.com
23. Radiopaedia at http://radiopaedia.org
24. Radiolopolis at http://www.radiolopolis.com
25. RadsWiki at http://www.radswiki.net
26. Yottalook at http://www.yottalook.com
27. RadiologySearch at http://www.radiologysearch.net
28. Clunie D A (2000) DICOM Structured Reporting, PixelMed Publishing, Bangor, Pennsylvania, Library of Congress Card Number: 00191700, ISBN 0-9701369-0-0
29. Bamidis P D, Konstantinidis S, Papadelis C L, Perantoni E, Styliadis C, Kourtidou-Papadeli C, Pappas C (2008) An e-learning platform for Aerospace Medicine, Hippokratia;12(Suppl 1):15-22
30. Moein A, Youssefi K (2009) A Novel Method to Study DICOM Tags and Definitions for Structured Report and Image Analysis Purposes, 25th Southern Biomedical Engineering Conference, IFMBE Proceedings, Miami, Florida, USA, 15-17 May 2009, pp 73-74
31. American College of Radiology, National Electrical Manufacturers Association (ACR-NEMA), "DICOM official website" at http://medical.nema.org/

Author: Ali Moein
Institute: Azad University, Science and Research Branch
City: Tehran
Country: Iran
Email: [emailprotected]


Novel Detection Method for Monitoring of Dental Caries Using Single Digital Subtraction Radiography

J.H. Park1,2, Y.S. Choi3, G.J. Lee1,2, S. Choi1,2, K.S. Kim1,2, D.H. Park1,2, I. Cho1,2, and H.K. Park1,2,*

1 Department of Biomedical Engineering, School of Medicine, Kyung Hee University, Seoul, Korea
2 Healthcare Industry Research Institute, Kyung Hee University, Seoul, Korea
3 Department of Oral and Maxillofacial Radiology, Institute of Oral Biology, School of Dentistry, Kyung Hee University, Seoul, Korea

Abstract— This study suggests a novel detection method for monitoring dental caries based on pixel gray values in digital subtraction radiography images obtained from single dental images of patients with dental caries. The advantage of single digital subtraction radiography (SDSR) is knowing the status of teeth with caries without requiring a second digital radiograph. Digital subtraction is currently used in radiographic studies of periapical lesions or other dental disorders that have been treated and whose progress must be evaluated over time. SDSR is a novel detection method for caries that detects dental mass changes from only one dental radiograph. Subjects were chosen from among patients who were diagnosed with dental caries using an intraoral X-ray system; this study marks the points of emphasis in hidden dental caries in dental X-ray images from 11 subjects. For each caries lesion that was diagnosed, a mean pixel value was obtained from SDSR using a scale ranging from 0 to 255 gray values. The image mean variable of the tooth was 71.99 (± 25.64) and 3.25 (± 0.85) (P < 0.0001) for caries and healthy tissue, respectively. SDSR was found to be a novel detection method that uses single dental images of patients to mark the points of emphasis in hidden dental caries. Keywords— SDSR, dental caries, intraoral X-ray.

I. INTRODUCTION

Radiographic images are two-dimensional representations of three-dimensional reality; hence, the images of different anatomical structures are superimposed on each other, which makes it difficult to detect lesions [1,2]. The protective outer surface of the anatomic crown is made up of enamel. Dental caries is the disease process of decay in which acid formed from carbohydrate, aided by Streptococcus mutans bacteria, attacks the tooth surface [3]. Digital subtraction radiography (DSR) is a method that can resolve these deficiencies and increase diagnostic accuracy [4]. The subtraction method was introduced by B.G. Ziedses des Plantes in the 1920s. Image subtraction is performed to suppress background features and to reduce background complexity, compress the dynamic range, and amplify small differences by superimposing scenes obtained at different times [5]. Subtraction radiography was introduced to dentistry in the 1980s [4]. It is used to compare standardized radiographs

taken at sequential examination visits. All unchanged structures are subtracted, and these areas are displayed in a neutral gray shade in the subtraction image; regions that have changed are displayed in darker or lighter shades of gray [6]. For radiographic dentinal lesions, the fraction of surfaces with cavitation has been reported to range between 50 and 90% [7]. Recurrent caries is more accurately detected with subtraction techniques. The dynamic nature of caries remineralization/demineralization could also be explored with reliable digital subtraction techniques [8]. This digital subtraction method, although commonly used in clinical dental research, has yet to be applied in clinical caries diagnosis by general practitioners because of the difficulty of image registration. Hence, the purpose of this study was to develop a novel detection method for proximal caries, based on pixel gray values in digital subtraction radiography images obtained from a single dental image of the patient, for use in monitoring dental caries.

II. METHODOLOGY

A. Tooth Images Selected

Study subjects were chosen from among patients who were diagnosed as having proximal dental caries with an intraoral X-ray system at the Dental Medical Center, Kyung Hee University. The digital radiographs were acquired using a Heliodent DS intraoral X-ray system (Sirona Dental Systems GmbH, Bensheim, Germany) and storage phosphor plates from a Kodak RVG 6100 system. The digital image receptors were 1140 × 1920 pixels (dimensions of active area: 27 × 36 mm) with true image resolution and 256 gray levels, and were capable of providing more than 20 lp/mm of spatial resolution. Each image was taken using the system setup with a 12-inch cone operating at 60 kVp, 7 mA, and 0.32 s.

B. Proposed Novel Method of Image Subtraction

Digital subtraction radiography is a technique that allows us to determine quantitative changes in radiographs. The


premise is quite simple. A radiographic image is generated before a particular treatment is performed. At some time after the treatment, another image is generated. The resultant image shows only the changes that have occurred, subtracting those components of the image that are unchanged. The magnitude of the changes can then be measured by evaluating the histogram (a graphic depiction of the distribution of gray levels) of the resultant image. Direct digital imaging has been a great help in the quest to take the technique of digital subtraction radiography out of the laboratory setting and actually use it clinically. Fig. 1 shows the flowchart of the novel method for proximal caries detection from a single dental image of a patient by digital subtraction radiography. The X-ray dental image is first subjected to image preprocessing. This preprocessing is used to reduce the background noise that comes from the lookup table, and to prepare the image for further processing such as image subtraction.
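The paper does not spell out the SDSR preprocessing and subtraction operations in detail, so the following sketch illustrates only one plausible reading: a heavily smoothed copy of the single radiograph stands in for the second exposure, and subtracting it emphasizes local density changes. The blur width `sigma` is an assumed parameter, not a value from the paper.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def sdsr(image, sigma=15):
    """Single digital subtraction radiography (illustrative sketch).

    A heavily smoothed copy of the one available radiograph serves
    as the 'reference' that a second exposure would normally provide;
    subtracting it emphasizes local dental mass changes such as
    carious lesions. `sigma` (in pixels) is an assumed parameter.
    """
    img = image.astype(np.float64)
    background = gaussian_filter(img, sigma=sigma)  # smooth reference
    detail = img - background                       # subtraction step
    # rescale to the 0-255 gray range used in the paper
    detail -= detail.min()
    if detail.max() > 0:
        detail *= 255.0 / detail.max()
    return detail.astype(np.uint8)
```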

Fig. 2 Perceptual image of hidden dental caries using SDSR. (a) Original image and (b) perceptual reverse image of the hidden dental caries using SDSR

Fig. 2(b) is a reversed image of the caries-detection result, showing the dental mass changes detected by SDSR from the original image in Fig. 2(a). In this study, the advantage of SDSR is that it reveals the status of carious teeth without requiring a second digital radiograph.

Fig. 3 Results for the image mean variables of the carious and healthy tissue of the same tooth using SDSR

Fig. 1 Novel detection method for monitoring of dental caries from a patient’s single dental image by SDSR

III. RESULTS AND DISCUSSIONS

Fig. 2 shows the resulting images of proximal caries detection from a single dental image of a patient according to the proposed method. The X-ray dental image of Fig. 2(a) was first subjected to image preprocessing. In Fig. 2(a), the carious area does not clearly reveal the state of the caries.

To evaluate the contrast in detecting dental caries as a function of the histogram, measurements of the relative difference between carious and healthy teeth were defined. Figure 3 shows the results of the image mean variables of carious and healthy tissue from the same tooth (N = 11) using SDSR. The image mean variable of the tooth was 71.99 (± 25.64) for caries and 3.25 (± 0.85) for healthy tissue (P < 0.0001). SDSR was found to be a novel detection method that uses single dental images of patients to mark the points of emphasis in hidden dental caries.
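The comparison above amounts to averaging subtraction-image gray values inside carious and healthy regions of interest and testing the difference. A minimal sketch follows; the boolean ROI masks and the paired design are our assumptions.

```python
import numpy as np
from scipy import stats

def roi_mean(subtraction_image, mask):
    """Mean 0-255 gray value inside a boolean ROI mask."""
    return subtraction_image[mask].mean()

def compare_groups(caries_means, healthy_means):
    """Paired t-test of carious vs. healthy mean gray values.

    Each input holds one mean per tooth (N = 11 in this study).
    Returns the t statistic and two-sided p-value.
    """
    t, p = stats.ttest_rel(caries_means, healthy_means)
    return t, p
```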

IV. CONCLUSIONS

Pixel gray value measurements in subtraction radiography images constitute a suitable complementary method for


monitoring outcomes of remineralization. This digital image subtraction method, although commonly used in clinical dental research, has not yet been routinely applied in clinical caries diagnosis by general practitioners, mainly because of the difficulty of image registration, i.e., aligning the second radiograph with the first. Hence, this study was designed to provide a novel detection method for proximal caries, based on pixel gray values in digital subtraction radiography images from a patient's single dental image, for use in monitoring dental caries. In SDSR, the image is used to mark the points of emphasis in hidden dental caries; hence the novel caries-monitoring method of this study. Comparing carious and healthy tissue, the image mean gray value showed a statistically significant difference between carious and sound tissue (p < 0.0001). It has been demonstrated that SDSR is a new detection method for dental caries that uses single dental images of patients.

ACKNOWLEDGMENT

This research was supported by the research fund from Seoul R&BD (Grant # CR070054).

REFERENCES

1. Matteson SR, Deahl ST (1996) Advanced imaging methods. Crit Rev Oral Biol Med 7:346-395
2. Christgau M, Hiller KA (1998) Quantitative digital subtraction radiography for the determination of small changes in bone thickness. Oral Surg Oral Med Oral Pathol Oral Radiol Endod 85:462-472
3. Woelfel JB (1990) Dental Anatomy: its Relevance to Dentistry, 4th Ed., Lea & Febiger, Malvern, PA
4. Bragger U (1988) Digital imaging in periodontal radiography. A review. J Clin Periodontol 15:551-557
5. Woo BMS, Zee K-Y (2003) In vitro calibration and validation of a digital subtraction radiography system using scanned images. J Clin Periodontol 30:114-118
6. Reddy MS, Wang IC (1999) Radiographic determinants of implant performance. Adv Dent Res 13:136-145
7. Ratledge DK, Kidd EAM, Beighton D (2001) A clinical and microbiological study of approximal carious lesions. Part 1: The relationship between cavitation, radiographic lesion depth, the site specific gingival index and the level of infection of the dentine. Caries Res 35:3-7
8. Bader J, Shugars D (1993) Need for change in standards of caries diagnosis epidemiology and health services research perspective. J Dent Educ 57:415-421

Author: Hun-Kuk Park
Institute: Kyung Hee University
Street: 1 Hoeki-dong, Dongdaemun-gu
City: Seoul
Country: Korea
Email: [emailprotected]


Targeted Delivery of Molecular Probes for in vivo Electron Paramagnetic Resonance Imaging S.R. Burks1,2, E.D. Barth3, S.S. Martin2,4, G.M. Rosen1,5, H.J. Halpern3, and J.P.Y. Kao1,2 1

Center for Biomedical Engineering and Technology, University of Maryland, and Medical Biotechnology Center, University of Maryland Biotechnology Institute, and Center for EPR Imaging In Vivo Physiology, University of Maryland, Baltimore, USA 2 Department of Physiology, University of Maryland, Baltimore, USA 3 Department of Radiation Oncology and Center for EPR Imaging In Vivo Physiology, University of Chicago, Chicago, USA 4 Marlene and Stewart Greenebaum Cancer Center, University of Maryland, Baltimore, USA 5 Department of Pharmaceutical Sciences, University of Maryland, Baltimore, USA

Abstract— With recent advances in electron paramagnetic resonance imaging (EPRI), in vivo visualization of a physiologically distinct tissue (e.g., a tumor) has become a real possibility. EPRI could be a powerful imaging modality to detect metastatic lesions and report tissue-specific physiological information. Approximately 25–30% of breast tumors overexpress the Human Epidermal Growth Factor Receptor 2 (HER2). HER2-overexpressing breast tumors are proliferative, metastatic, and have poor clinical prognoses. We have developed a novel mechanism for selective in vivo delivery of “spin probes” (molecular probes for EPRI) to Hc7 cells, which are MCF7 breast cancer cells engineered to overexpress HER2. Spin probes can be encapsulated in anti-HER2 immunoliposomes at high concentration (>100 mM). At such concentrations, the spectroscopic signal of spin probes is severely “quenched”—a process analogous to the self-quenching of fluorophores. This makes the intact immunoliposomes spectroscopically “dark” and thus invisible by EPRI. Tumor-specific contrast is generated after selective endocytosis of anti-HER2 immunoliposomes. Intracellular degradation of endocytosed liposomes liberates the spin probes from the liposomal lumen. Once de-quenched by dilution into the much larger cellular volume, the spin probes regain their spectral signal and make the cells visible by EPRI. Through uptake of immunoliposomes, Hc7 cells can achieve an intracellular spin probe concentration of ~750 μM. Using imaging phantoms, we verify that this concentration of spin probes is easily imageable by EPRI. We have optimized immunoliposomes for in vivo applications by increasing their persistence in circulation to maximize tumor targeting. Through near-infrared fluorescence imaging of tumor-bearing animals, we demonstrate that optimized anti-HER2 immunoliposomes selectively target Hc7 tumors in vivo, enabling high-contrast imaging with minimal background. This work lays the foundation for imaging Hc7 tumors with EPRI.

Keywords— Electron paramagnetic resonance, Breast cancer, HER2, Liposomes, Nitroxides, Imaging.

I. INTRODUCTION

Very-low-frequency electron paramagnetic resonance imaging (EPRI) is an attractive emerging modality for

imaging metastatic breast tumor lesions. EPRI can detect and image paramagnetic species in vivo and in real time [1]. Endogenous paramagnetic molecules are too scarce to be detected by EPRI; therefore, exogenous “spin probes” such as nitroxides must be used to label features of interest. EPRI using nitroxides offers the advantage of being a magnetic resonance imaging modality capable of reporting cellular physiology; thus, in addition to localizing a tumor, the probes can also report on its physiological status. We previously synthesized nitroxides that are well-retained by cells and thus exhibit long-lived intracellular signals that can be imaged by EPRI [2,3]. We have also shown that nitroxides, like fluorophores, can be encapsulated in liposomes at high concentrations (>100 mM) and show concentration-dependent signal quenching. Thus, intact liposomes containing quenched probes have attenuated spectral signals and are spectroscopically "dark". After endocytosis by cells, however, lysis of the liposomes liberates and dilutes the encapsulated probes into the cell; the resulting dequenching of the probe signal renders the cell visible [4]. Encapsulation of probes at high concentration minimizes background signal from unendocytosed liposomes and creates a cell-activated contrast-generating mechanism. By itself, however, liposomal delivery is limited by the inability to deliver probe molecules selectively to a particular cell type. As a tool for delivering imaging agents to a physiologically distinct tissue such as a breast tumor, liposomes must be targetable—i.e., they must incorporate features that enable selective uptake in a tissue of interest, but not in other, indifferent, tissues. Liposomal surfaces can be readily decorated with moieties that target them to a specific tissue. For example, immunoliposomes, bearing surface-conjugated antibody fragments, can target distinct antigens. Specifically, immunoliposomes targeted against the human epidermal growth factor receptor 2 (HER2) have been used to enhance delivery of chemotherapeutics to HER2-expressing tumors [5]. We have previously demonstrated that Hc7

K.E. Herold, W.E. Bentley, and J. Vossoughi (Eds.): SBEC 2010, IFMBE Proceedings 32, pp. 466–469, 2010. www.springerlink.com

Targeted Delivery of Molecular Probes for in vivo Electron Paramagnetic Resonance Imaging

cells, which are MCF7 breast tumor cells engineered to overexpress HER2, selectively endocytose immunoliposomes containing quenched fluorescein, which leads to bright intracellular fluorescence in vitro. MCF7 cells, which express only a low, physiological level of HER2, do not accumulate significant fluorescence [6]. Analogously, we have shown that immunoliposomes encapsulating quenched concentrations of nitroxide can deliver ~750 µM nitroxide intracellularly to Hc7 cells upon endocytosis, while contributing minimal background signal. Using immunoliposomes as delivery vehicles in vivo requires additional considerations. Liposomes are rapidly cleared from the circulation by the reticulo-endothelial system (RES). Incorporating into the liposomes a small proportion of lipid conjugated to poly(ethylene glycol) (PEG) retards clearance by the RES [7]. The longer circulation times of such sterically stabilized, “PEGylated” liposomes enhance their targeting potential in vivo. We demonstrate here that sterically stabilized liposomes are more persistent in circulation than classical liposomes, which lack PEG. We also show that anti-HER2 immunoliposomes encapsulating quenched Indocyanine Green (ICG) generate high-contrast fluorescence images of Hc7 tumors in vivo. Using EPRI tissue phantom models, we demonstrate the feasibility of combining this targeting approach with EPRI.
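The contrast mechanism rests on simple dilution arithmetic: probe quenched at >100 mM inside a liposome lumen becomes dilute, and hence visible, once released into the far larger cell volume. The sketch below estimates the resulting intracellular concentration; the liposome diameter and cell volume are assumed round numbers of ours, not measurements from this study.

```python
import math

def intracellular_concentration(n_liposomes, c_encap_mM=150.0,
                                liposome_diam_nm=100.0, cell_vol_pL=2.0):
    """Estimate probe concentration after liposomal contents dequench
    by dilution into the cell volume (geometry fully idealized).

    c_encap_mM: encapsulated nitroxide concentration (150 mM is the
    value used in the clearance study); liposome_diam_nm and
    cell_vol_pL are our assumptions, not values from the paper.
    Returns the intracellular concentration in µM.
    """
    r_m = liposome_diam_nm * 1e-9 / 2.0
    lumen_L = (4.0 / 3.0) * math.pi * r_m**3 * 1e3       # m^3 -> liters
    cell_L = cell_vol_pL * 1e-12
    moles = n_liposomes * lumen_L * (c_encap_mM * 1e-3)  # mol delivered
    return moles / cell_L * 1e6                          # mol/L -> µM

# With these assumptions, ~2e4 endocytosed liposomes yield ~750 µM.
```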

II. MATERIALS AND METHODS

A. General Materials and Methods

Dipotassium (2,2,5,5-tetramethylpyrrolidin-1-oxyl-3-ylmethyl)amine-N,N-diacetate was synthesized as described previously [2]. ICG was from Sigma (St. Louis, MO). Lipids were from Avanti Polar Lipids (Alabaster, AL); cell culture media and biochemicals were from Invitrogen (Grand Island, NY). Mice were from Harlan (Indianapolis, IN). Herceptin was a gift from Dr. Katherine Tkaczuk (University of Maryland, Baltimore). Data analyses and presentation were performed with Origin 8.0 (OriginLabs, Northampton, MA), Living Image 3.0 (Caliper Life Sciences, Hopkinton, MA), and Matlab 2010a (The Mathworks, Natick, MA). Hc7 cells (gift from Angela M. Brodie, University of Maryland, Baltimore) were maintained at 37°C under 5% CO2, in Dulbecco’s Modified Eagle Medium (DMEM) supplemented with 10% (v/v) fetal bovine serum (FBS), 2 mM L-glutamine, Pen/Strep (100 U/mL penicillin, 100 µg/mL streptomycin) and 500 µg/mL hygromycin B. Anti-HER2 immunoliposomes were prepared as previously described [7], and comprised 1,2-distearoylphosphatidylcholine (DSPC), cholesterol (Chol), ammonium


1,2-distearoyl-sn-glycero-3-phosphoethanolamine-N-[poly(ethyleneglycol)2000] (PEG-PE), and ammonium 1,2-distearoyl-sn-glycero-3-phosphoethanolamine-N-[maleimide-poly(ethyleneglycol)2000] (PE-PEG-maleimide) in the molar ratio 3 DSPC : 2 Chol : 0.1 PEG-PE : 0.06 PE-PEG-maleimide.

B. Animals and Hc7 Tumor Inoculation

Female NIH Swiss or NOD-SCID mice (5–6 weeks of age) were used for experimentation. SCID mice were previously ovariectomized by the vendor. At least 48 hr prior to tumor inoculation, estrogen pellets (2.5 mg / 60-day release, Innovative Research of America, Sarasota, FL) were implanted in the SCID mice. Hc7 cells (2 × 10⁷ suspended in 0.1 mL DPBS) were subcutaneously injected into the legs of SCID mice. Tumors were allowed to grow for ~3 weeks prior to imaging.

C. Clearance of Liposomally-Encapsulated Nitroxides from Circulation

For pharmacokinetic measurements, NIH Swiss mice received intravenous injections of liposomes with or without PEG-PE at a dose of 3.75 µmol of encapsulated nitroxide/kg body weight. At various times, blood was drawn from the mouse, diluted into 1 mL of deionized H2O (resistivity, 18.3 MΩ), and subjected to 3 freeze/thaw cycles. Samples were measured for nitroxide content by EPR spectroscopy and for Na+ content using a Na+-selective electrode (model no. 8411BN, Fisher Scientific, Hampton, NH). EPR measurements were normalized to plasma Na+ content. EPR spectroscopy was performed on an X-band EPR spectrometer (model E-109, Varian Inc., Palo Alto, CA).

D. Imaging

For in vivo fluorescence imaging, SCID mice bearing Hc7 tumors were injected with anti-HER2 immunoliposomes encapsulating 1 mM ICG (2.5 µmol ICG/kg body weight). At 3 hr post-injection, ICG fluorescence in the mice was imaged (IVIS 200 optical imager, Caliper Life Sciences, Hopkinton, MA) at the following settings: acquisition time, 4 s; f-stop, 1; and medium binning (8×8 pixels). For EPR imaging, an agarose cylinder (4% w/v) was prepared as described previously [4]. It measured 6 mm in diameter and 10.2 mm in length, and was impregnated with 400 µM nitroxide. The cylinder was sealed in polyvinylsiloxane dental impression material (GC Dental Products, Kasugai, Japan) and fixed in the cavity (19 mm diameter) of an EPR imaging spectrometer. Continuous-wave EPR image data acquisition, reconstruction, and analysis were performed according to established procedures [4].


III. RESULTS

A. Clearance of Nitroxide-Containing Liposomes

To assess the improvement in circulatory retention of sterically stabilized liposomes, mice (n = 3) were given classical liposomes lacking PEG-PE or sterically stabilized liposomes containing 5 mole-% PEG-PE; both types of liposomes encapsulated 150 mM nitroxide. At various times, blood was drawn from the mice. Because liposomes in the blood contained quenched nitroxides, samples were subjected to repeated freeze-thaw cycles to lyse the liposomes and dequench the nitroxide spectral signal. The nitroxide signal in each sample was assayed by EPR spectroscopy. Clearance of classical liposomes is best fit by a first-order exponential decay with a time constant of t1/e = 6.9 ± 5.45 hr (or, equivalently, a half-life of t1/2 = 4.1 ± 3.27 hr)—implying that classical liposomes would be essentially eliminated from circulation by ~20 hr. Sterically stabilized liposomes, however, persisted much longer (t1/e = 17.5 ± 2.87 hr, t1/2 = 10.5 ± 1.54 hr). Incorporating PEG-PE into the liposomal formulation extends circulation times by ~2.5-fold, so that even after 50 hr, ~10% of the original nitroxide signal remains in the circulation.

B. In Vivo Fluorescence Imaging of Hc7 Tumors

To determine the ability of anti-HER2 immunoliposomes to target and generate contrast in Hc7 tumors in vivo, Hc7 tumor-bearing mice were given intravenous injections of ICG-containing immunoliposomes and imaged for ICG fluorescence 3 hr post-injection. A representative fluorescence image is shown in Fig. 1. In the tumor loci (indicated by red arrows), dequenching of the ICG resulted in intense tumor-associated fluorescence with minimal background signal arising from the surrounding tissue. After imaging, the mouse was euthanized and the spleen, liver, kidneys, and tumors were dissected and imaged for ICG fluorescence ex vivo (data not shown). As expected, organs associated with clearance of liposomes and ICG (i.e., spleen, liver, and kidneys) also accumulated imageable fluorescence signals.

C. EPR Imaging of a Nitroxide-Containing Agarose Cylinder

Having demonstrated that immunoliposomes are highly selective for Hc7 cells in vivo and that they can deliver ~750 µM nitroxide intracellularly to Hc7 cells in vitro [6], we investigated whether this concentration would be sufficient for EPRI of Hc7 tumors. An agarose cylinder (6 mm diameter, 10.2 mm length) was impregnated with 400 µM nitroxide and imaged by EPRI. A cross-sectional view of

the reconstructed image is shown in Fig. 2. The geometry of the phantom is faithfully reproduced in the image; the signal-to-noise ratio (SNR) of the image is 109, and the resolution of the image is 2.5 ± 0.19 mm. Therefore, should Hc7 tumors accumulate concentrations of nitroxides similar to those Hc7 cells attain in vitro, they should be easily imaged by EPRI.

Fig. 1 In vivo fluorescence imaging of Hc7 tumors. SCID mouse bearing two Hc7 tumors (red arrows), imaged 3 hr post-injection with anti-HER2 immunoliposomes encapsulating 1 mM ICG. Tumors are imaged with an SNR of 180.
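The clearance measurements described above are fit by a single-exponential decay, from which t1/e and the half-life t1/2 = t1/e · ln 2 follow. A minimal fitting sketch, assuming time points in hours and Na+-normalized EPR signals as inputs:

```python
import numpy as np
from scipy.optimize import curve_fit

def decay(t, s0, tau):
    """First-order clearance model: signal = s0 * exp(-t / tau)."""
    return s0 * np.exp(-t / tau)

def fit_clearance(t_hr, signal):
    """Return (t_1/e, t_1/2) in hours from blood-sample EPR signals."""
    (s0, tau), _ = curve_fit(decay, t_hr, signal, p0=(signal[0], 10.0))
    return tau, tau * np.log(2)
```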

IV. DISCUSSION

We have demonstrated here that immunoliposomes can be engineered to persist in circulation, thereby maximizing their tumor-targeting potential. Long-lived immunoliposomes are highly selective for Hc7 tumors in vivo and generate sufficient fluorescence signals for high-contrast optical imaging of tumors in vivo. We have demonstrated through EPR imaging phantom models that, should Hc7 cells accumulate concentrations of nitroxides in vivo similar to those previously reported in vitro, they should be routinely imageable by EPRI. There are factors that could limit the feasibility of this approach using EPRI. Liposomes access the tumor volume through the vasculature. While a previous study of anti-HER2 immunoliposomes in a xenograft model showed uniform micro-distribution within tumors [8], the macro-distribution may be inhomogeneous owing to differential vascularization throughout the tumor volume. This would result in liposomes accessing only a fraction of the tumor volume, and despite the very high intracellular concentrations that are potentially achievable with immunoliposomal delivery, the total nitroxide that is delivered to the tumor


may still be modest. This motivates additional improvements to SNR. SNR in EPRI can be improved by delivering more nitroxide molecules to cells, and by optimizing the spectroscopic properties of the nitroxides themselves. Liposomes of 100-nm outer diameter are near-optimal in vivo; larger-diameter liposomes exhibit increased circulatory clearance and reduced extravasation, both of which offset the advantage of the larger luminal volume. However, nitroxides can be improved through rational design. First, nitroxides that are zwitterionic at physiologic pH are highly water-soluble but require no counter-ions, which increase the osmolarity of the encapsulated solution while contributing no imageable signal. At physiological pH, zwitterionic nitroxides can be encapsulated at 300 mM, twice the concentration of the mono-anionic nitroxide used in this study. Second, deuterium- and 15N-substituted nitroxides have narrower spectral peaks and correspondingly larger peak amplitudes. Preliminary studies indicate that isotopic substitution increases the EPR peak amplitude by close to 10-fold (data not shown). The combination of these two improvements implies a 20-fold increase in the measurable nitroxide signal that could be generated in the tumor. Such a signal enhancement would greatly increase the feasibility of visualizing Hc7 tumors in vivo by EPRI. EPRI is an emergent imaging modality that could offer sensitive detection of HER2-overexpressing tumors, as well as useful insight into their physiology. The demonstration that it is possible to use anti-HER2 immunoliposomes to deliver imageable concentrations of imaging probes to HER2-overexpressing tumors selectively, combined with our current efforts aimed at optimizing nitroxide molecular structure for EPRI, bodes well for high-contrast EPRI of HER2-overexpressing tumors.

Fig. 2 EPR image of nitroxide-containing phantom. Cross-sectional view of an agarose cylinder (4%, 6 mm diameter) containing 400 µM nitroxide. The cylinder is imaged with an SNR of 109; image resolution is 2.5 ± 0.19 mm. Axis labels are in cm.

V. CONCLUSIONS

The circulating lifetime of anti-HER2 immunoliposomes is extended by surface modification with PEG. Sterically stabilized immunoliposomes encapsulating quenched ICG are highly selective for Hc7 tumors in vivo and are capable of generating robust fluorescence in Hc7 tumors with minimal background in circulation. Nitroxide-containing tissue phantom models containing µM concentrations of nitroxide are easily visualized by EPRI, further suggesting that if Hc7 tumors accumulate similar concentrations through immunoliposome targeting and delivery, they too should be imageable by EPRI.

ACKNOWLEDGMENT

This work was supported by National Institutes of Health Grants GM-56481 (JPYK), P41-EB-2034 (GMR and HJH), CA-98575 (HJH), and CA124704-03 (SSM).

REFERENCES

[1] Halpern HJ, Spencer DP, Vanpolen J et al. (1989) Imaging radiofrequency electron-spin-resonance spectrometer with high resolution and sensitivity for in vivo measurements. Rev Sci Instr 60:1040-1050
[2] Rosen GM, Burks SR, Kohr MJ et al. (2005) Synthesis and biological testing of aminoxyls designed for long-term retention by living cells. Org Biomol Chem 3:645-648
[3] Kao JP, Barth ED, Burks SR et al. (2007) Very-low-frequency electron paramagnetic resonance (EPR) imaging of nitroxide-loaded cells. Magn Reson Med 58:850-854
[4] Burks SR, Barth ED, Halpern HJ et al. (2009) Cellular uptake of electron paramagnetic resonance imaging probes through endocytosis of liposomes. Biochim Biophys Acta 1788:2301-2308
[5] Park JW, Kirpotin DB, Hong K et al. (2001) Tumor targeting using anti-HER2 immunoliposomes. J Control Release 74:95-113
[6] Burks SR, Macedo LF, Barth ED et al. (2010) Anti-HER2 immunoliposomes for selective delivery of electron paramagnetic resonance imaging probes to HER2-overexpressing breast tumor cells. Breast Cancer Res Treat, in press
[7] Woodle MC, Lasic DD (1992) Sterically stabilized liposomes. Biochim Biophys Acta 1113:171-199
[8] Kirpotin DB, Drummond DC, Shao Y et al. (2006) Antibody targeting of long-circulating lipidic nanoparticles does not increase tumor localization but does increase internalization in animal models. Cancer Res 66:6732-6740


New Tools for Image-Based Mesh Generation of 3D Imaging Data P.G. Young1, D. Raymont1, V. Bui Xuan2, and R.T. Cotton2 1

School of Engineering, Mathematics and Physical Sciences, University of Exeter, Exeter, UK 2 Software Development & Technical Services, Simpleware, Exeter, UK

Abstract— There has been increasing interest in the generation of models for computational modeling from imaging modalities such as MRI and CT. Although a wide range of mesh generation techniques are available, these on the whole have not been developed for meshing from 3D imaging data. The paper will discuss new tools specific to image-based meshing, and their interface with commercial FEA and CFD packages. Automated mesh generation techniques can generate models with millions of nodes. Reducing the size of the model can have a dramatic impact on computation time and on memory and CPU (Central Processing Unit) requirements. A proprietary technique allowing the setting of different density zones throughout the model was developed, reducing the number of elements required to capture a given geometry while increasing the density around areas of greater interest. Micro-architectures can be generated to conform to existing domains. To control the mechanical properties of the structure, a re-iso-surfacing technique combined with a bisection algorithm is shown to allow density variations and porosity control. The concept of a relative density map, to represent the desired relative densities in the micro-architecture, is introduced. Both 2D and 3D examples of functionally and arbitrarily graded structures are given. Finally, a new homogenization algorithm has been implemented, using the meshing techniques described above together with parallel processing strategies to compute orthotropic mechanical properties from higher-resolution scans, enhancing the value of micro-level information and enabling it to be used for macro models on desktop computers. The ability to automatically convert any 3D image dataset into high-quality meshes is becoming the new modus operandi for anatomical analysis. New tools for image-based modeling have been demonstrated, improving the ease of generating meshes for computational mechanics and opening up areas of research that would not be possible otherwise.

Keywords— image-based meshing, image processing, mesh generation, finite element analysis.

I. INTRODUCTION

There has been increasing interest in the generation of models appropriate for computational modeling from imaging modalities such as MRI and CT. Novel methods of generating the required finite element and finite volume meshes directly and robustly from the image data have been proposed in recent years; however, there is a range of issues related to image processing of the data which still needs to be addressed. The paper will discuss issues specific to image-based meshing, focusing on techniques specific to image-based mesh generation, and will also discuss the interface with commercial FEA and CFD packages (e.g., ANSYS, Fluent, LS-DYNA, etc.). A number of examples that cover different applications within and outside the Computational Biomechanics field will be presented.

II. CAD-BASED VERSUS IMAGE-BASED MESHING

'CAD-based approaches' use the scan data to define the surface of the domain and then create elements within this defined boundary [1]. These techniques do not easily allow for more than one domain to be meshed, as the multiple surfaces generated are often non-conforming, with gaps or overlaps at interfaces where two or more structures meet (cf. Fig. 1). The 'image-based approach' presented by the authors is a more direct way, as it combines the geometric detection and mesh creation stages in one process. The technique generates 3D hexahedral or tetrahedral elements throughout the volume of the domain [2], thus creating the mesh directly with conforming multipart surfaces (cf. Fig. 1). This technique has been implemented as a set of computer codes (ScanIP, +ScanFE and +ScanCAD).

Fig. 1 Original segmentation (left), non-conforming (centre) and conforming multipart surface reconstruction (right)

A. Robustness and Accuracy

Modeling complex topologies with possibly hundreds of disconnected domains (e.g. inclusions in a matrix), via a


CAD-based approach is virtually intractable. For the same problem, an image-based meshing approach is by contrast remarkably straightforward, robust, accurate and efficient. Meshes can be generated automatically and exhibit image-based accuracy, with domain boundaries of the finite element model lying exactly on the iso-surfaces, taking into account partial volume effects and providing sub-voxel accuracy (cf. Fig. 2).

Fig. 2 a) Original image, unsmoothed (203,238 mm³); b) traditionally smoothed (180,605 mm³, Δvolume = -11.14%); c) smoothed with Simpleware's smoothing algorithm (202,534 mm³, Δvolume = -0.35%)

B. Anti-aliasing and Smoothing

Where anti-aliasing and smoothing are applied to the segmented volumes, the presented technique is both topology and volume preserving. If appropriate algorithms are not used, smoothing and anti-aliasing the data can introduce significant errors in the reconstructed geometry and topology. Most implemented smoothing algorithms are not volume preserving and can lead to shrinkage of convex hulls and topological changes. Whilst this is not particularly problematic when the purpose is merely enhanced visualization, the influence can be dramatic when the resultant models are used for metrology or simulation purposes.

III. NEW DEVELOPMENTS IN IMAGE-BASED MESHING

A. Generation of Variable Density Meshes

Automated mesh generation techniques can easily generate millions of nodes, which ultimately leads to larger models to solve. The number of nodes is directly linked with the computational complexity of a problem. Reducing the size of the model can therefore have a dramatic impact on the computation time, as well as on the memory and CPU (Central Processing Unit) requirements. The authors have developed a proprietary technique which allows the setting of different density zones throughout the model, effectively reducing the overall number of elements required to capture a given geometry, while allowing the mesh density to be increased around areas of greater interest if necessary. An example is given in Fig. 3, where the head of the femur has a higher mesh density than the rest of the femur.

Fig. 3 Femur with higher mesh density at the head

B. Generation of Micro-architectures

Micro-architectures can be generated to conform to an existing domain. To control the mechanical properties of the structure, a re-iso-surfacing technique is shown to allow density variations throughout the architecture. Combined with a bisection algorithm, the technique allows micro-architectures to be generated with a specific porosity. The authors introduce the concept of a relative density map, a method for representing the desired relative densities in the micro-architecture where both the minimum and maximum porosity values can be specified. Examples of functionally graded and arbitrarily graded structures in both 2D and 3D are given, as shown in Fig. 4.

Fig. 4 Integration of lattice structure into a) CAD structure and b) image data

C. Homogenization

Finally, a new homogenization algorithm has been implemented. Through the innovative use of the meshing


techniques developed by the authors and parallel processing strategies, it is possible to compute orthotropic mechanical properties from higher-resolution scans. This enhances the value of the information obtained at the micro level, enabling it to be effectively used for macro models on desktop computers.
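As a self-contained illustration of the basic image-to-mesh step (not the authors' ScanIP/+ScanFE pipeline), the marching-cubes routine in scikit-image extracts a triangulated iso-surface whose vertices are interpolated between voxels, the same sub-voxel placement idea discussed in section II:

```python
import numpy as np
from skimage import measure

def isosurface_mesh(volume, level, spacing=(1.0, 1.0, 1.0)):
    """Extract a triangle surface mesh from a 3D image volume.

    Vertices are placed by linear interpolation between voxels, so
    they lie on the gray-value iso-surface (sub-voxel accuracy).
    `spacing` carries the scan's voxel dimensions (e.g., in mm).
    """
    verts, faces, normals, _ = measure.marching_cubes(
        volume.astype(np.float32), level=level, spacing=spacing)
    return verts, faces, normals

# Example: mesh a synthetic sphere at the 0.5 iso-level
z, y, x = np.mgrid[-32:32, -32:32, -32:32]
sphere = (np.sqrt(x**2 + y**2 + z**2) < 20).astype(np.float32)
verts, faces, _ = isosurface_mesh(sphere, level=0.5)
```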

IV. CONCLUSIONS

The ability to automatically convert any 3D image dataset into high-quality meshes is becoming the new modus operandi for anatomical analysis. Techniques have been developed for the automatic generation of volumetric meshes from 3D image data, including image datasets of complex structures composed of two or more distinct domains and including complex interfacial mechanics. The techniques guarantee the generation of robust, low-distortion meshes from 3D data sets for use in finite element analysis (FEA), computer aided design (CAD) and rapid prototyping (RP). The ease and accuracy with which models can be generated opens up a wide range of previously difficult or intractable problems to numerical analysis.

REFERENCES

1. Cebral J, Loehner R (2001) From medical images to anatomically accurate finite element grids. Int J Num Methods Eng 51:985-1008
2. Young P, Beresford-West T et al. (2008) An efficient approach to converting 3D image data into highly accurate computational models. Philosophical Transactions of the Royal Society A 366:3155-3173

Author: Philippe G Young
Institute: University of Exeter
Street: North Park Road
City: Exeter
Country: United Kingdom
Email: [emailprotected]

Characterization of Speed and Accuracy of a Nonrigid Registration Accelerator on Pre- and Intraprocedural Images Raj Shekhar1, William Plishker1, Sheng Xu2, Jochen Kruecker2, Peng Lei1, Aradhana Venkatesan3, and Bradford Wood3 1

Department of Diagnostic Radiology and Nuclear Medicine, University of Maryland, Baltimore, Maryland, USA 2 Philips Research North America, Briarcliff Manor, New York, USA 3 Center for Interventional Oncology, Clinical Center and National Cancer Institute, National Institutes of Health, Bethesda, Maryland

Abstract— Targeting of FDG-avid, PET-visible but CT-invisible lesions with CT is, by definition, challenging and is a source of uncertainty in many percutaneous ablative procedures. Preprocedural PET has been overlaid on intraprocedural CT to improve the visualization of such lesions. Such an approach depends on registration of PET and CT, but current registration methods remain mostly slow and manual and do not optimally account for deformation of abdominothoracic anatomy. We have developed a fully automatic, nonrigid image registration technology that is accelerated on 3 field programmable gate array (FPGA) chips to execute in under 1 min. The speed and accuracy of registration, which are critical to eventual clinical adoption, were recorded. On average, the FPGA-based registration took 53 s and agreed with the existing solution at the lesion center to within 6.6 mm (1.7 voxels). We demonstrate the feasibility of fast and accurate nonrigid registration capable of enabling efficient multimodality PET-CT image-guided interventional procedures. The sub-minute speed of the FPGA method is important for clinical efficiency and on-demand intraprocedural PET-CT registration. The accuracy of FPGA-based nonrigid image registration is also acceptable. Our next steps are to introduce the FPGA registration in the clinic and re-test its accuracy, speed, and effectiveness in a larger patient population. When fully developed and tested, our approach might improve target visualization and thus the precision, safety, and outcomes of interventional radiology procedures.

Keywords— Medicine, image registration, FPGA.

I. INTRODUCTION

Because of the >90% sensitivity of fluorodeoxyglucose (FDG) positron emission tomography (PET) and the approximately 50% sensitivity of computed tomography (CT), many metastatic lesions are visible in PET but invisible in CT. Practical considerations discourage the use of a PET scanner for interventional procedures but allow the use of CT to provide intraprocedural imaging guidance in most biopsies and percutaneous ablations. Targeting of PET-visible but CT-invisible lesions with CT is, by definition, challenging and a source of uncertainty. Preprocedural PET has been overlaid on intraprocedural CT in an attempt to improve the intraprocedural visualization of such lesions, a process that employs PET data for needle placement. Such an approach depends on the registration of PET and CT, and current registration methods remain mostly manual or semi-automatic, making registration slow and clinically less practical. Moreover, most of these methods assume a rigid-body transformation to ease the registration task. While rigid-body assumptions are appropriate for some registration scenarios, they do not properly account for deformation of soft tissue such as that found in the thoracic and abdominal anatomy. Such motion may be the result of respiration, scanning position, or changes in shape or size over time. To properly address this nonrigid registration problem, we have developed a fully automatic, nonrigid image registration technology [1] that is accelerated on three field programmable gate array (FPGA) chips, such that registration requires only 1 minute or less of execution time. To characterize this solution for image-guided intervention scenarios, we have tested here the accuracy and speed of PET-CT registration by our FPGA-based registration method. The quality and speed of our solution are indicative of the feasibility of multimodality PET-CT imaging guidance during interventional procedures.


II. BACKGROUND

Intensity-based image registration algorithms rely on correlations between voxel (3D pixel) intensities and not on landmark detection, which makes them robust but computationally intensive. A transformation is often described as a deformation field, in which all parts of the image to be deformed (the floating image) have a specific deformation such that they align with the other image (the reference image). Construction of the deformation field can start from just a few parameters in the case of rigid registration, or from a set of control points which capture the nonuniformity of nonrigid registration. Regardless of the representation, a transformation contains the information necessary to deform all of the voxels in the floating image into the reference image space, aligning the features of the floating image with those of the reference. This transformed image can be compared to the reference image using a variety of similarity metrics, such as mean squared difference (MSD) or mutual information. For iterative approaches, the similarity value is returned so that it may guide the optimization engine towards successively better solutions. Problem parameters may change during run-time to improve speed and accuracy. While image registration is a computationally intensive problem, it can be readily accelerated by exploiting parallelism. Researchers have applied a variety of innovative approaches at different levels of the problem. To bring more accurate and more robust image registration algorithms into the clinical setting, a significant body of research has been dedicated to acceleration techniques for image registration. Graphics processors (GPUs) have been utilized in various image registration algorithms [2,3]. Clusters and other multiprocessor systems are often the target of higher-level parallelism [4,5]. The Cell processor has been used to accelerate rigid registration using mutual information, with additional speedup from processing only a subset of the total voxels [6]. However, these solutions are unable to take advantage of the lowest levels of parallelism in the application. Our FPGA-based solution utilizes an architecture specifically designed to exploit the significant amount of parallelism that occurs in processing a single voxel. It has the potential to provide the speed of registration necessary for clinical viability. Furthermore, our FPGA-based solution is constructed with off-the-shelf parts and housed in a standard PC, making it a practical solution for image navigation in terms of size and integration.
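The loop described above, transform the floating image, score it against the reference, and let an optimizer refine the transform, can be sketched compactly. The toy version below is ours (and rigid translation only, unlike the nonrigid FPGA engine); it uses the mean-squared-difference metric:

```python
import numpy as np
from scipy.ndimage import shift as nd_shift

def msd(a, b):
    """Mean squared difference similarity metric (lower is better)."""
    return np.mean((a.astype(np.float64) - b.astype(np.float64)) ** 2)

def register_translation(reference, floating, search=5):
    """Exhaustively search small 2D shifts of the floating image and
    keep the one that minimizes MSD against the reference image.
    A toy stand-in for the FPGA-accelerated nonrigid engine."""
    best, best_shift = np.inf, (0, 0)
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            moved = nd_shift(floating, (dy, dx), order=1)
            score = msd(reference, moved)
            if score < best:
                best, best_shift = score, (dy, dx)
    return best_shift, best
```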

III. METHODS

Archived abdominothoracic preprocedural PET and intraprocedural CT images of 7 patients with FDG-avid lesions, who either underwent biopsy or were treated with radiofrequency ablation, were re-registered using the FPGA-based registration method. During the actual procedure, a PET-CT fusion image to assist in needle placement was created for each of the patients using a prototype electromagnetic tool tracking system and manual/semi-automatic rigid image registration, which lasted several minutes. In one case with standalone preprocedural PET, the PET image was registered directly with intraprocedural CT. In 6 other cases, hybrid preprocedural PET/CT was available. The CT component of the hybrid PET/CT image was registered with intraprocedural CT in these cases. The resulting transformation, when applied to the PET component, helped create the registered preprocedural PET–intraprocedural CT fusion image. The speed and accuracy of registration, which are critical to eventual clinical adoption, were recorded. For the archived cases, a single-point registration solution at the lesion center from the manual/semi-automatic method was available. Although the FPGA-based registration method nonrigidly registered every voxel in the image, the accuracy of the matching single-point registration result was compared for the 2 methods (rigid vs. nonrigid). The timing results were acquired by a software timer that was started before registration began and stopped after the final transformation vector was derived. The setup and teardown overheads (such as image transfer) were omitted from these results, as we believe a final implementation integrated with a surgical navigation system would have negligible overhead. A final integrated solution would embed the FPGA acceleration engine in the image processing pipeline, making the core registration time the primary contributor to latency. Therefore we quote this core registration time, as it would be the additional viewing latency to the preprocedural data in the procedure room.
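The timing protocol maps onto a simple wrapper: start a timer once the images are already in place, stop it when the final transformation is available. In the sketch below, `run_registration` is a hypothetical callable standing in for the registration engine:

```python
import time

def timed_registration(run_registration, reference, floating):
    """Measure core registration time only, excluding image transfer
    and other setup/teardown, as done for the results reported here.
    `run_registration` is a hypothetical callable returning the
    final transformation."""
    t0 = time.perf_counter()
    transform = run_registration(reference, floating)
    elapsed_s = time.perf_counter() - t0
    return transform, elapsed_s
```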


IV. RESULTS

Figure 1 shows a particular case with preprocedural and intraprocedural images. In this case, both preprocedural PET and CT were available, so the intraprocedural and preprocedural CTs were registered to produce the deformation field that could be applied to the PET image. At the time of the procedure the patient was in a slightly different pose, causing a rigid misalignment of the body, but much of the soft tissue deformed as well, necessitating nonrigid registration for the best possible automated results. The result of the CT-CT registration is shown here, and the same deformation field can be applied to the PET image to produce an overlay of the preprocedural PET on the intraprocedural CT. Table 1 summarizes data characteristics and speed and accuracy results. The time required for registration is a direct function of the size of the 2 images and the degree of starting misalignment. The accuracy of image registration depends considerably on image resolution, which correlates with voxel size. On average, the FPGA-based registration took 53 s and agreed with the existing solution at the target center to within 6.6 mm (1.7 voxels).

V. CONCLUSION

We have proven the feasibility of fast and accurate nonrigid registration capable of enabling efficient multimodality PET-CT imaging-guided interventional procedures. Compared with the >5 min required to register PET and CT images either manually or semi-automatically, the automatic FPGA-based registration requires

RPMA (0.1149)) across all regions, and standard deviations of the same order as the mean path coefficient values. Thus Figure 2 was judged to be a poor model fit and was modified to the model in Figure 3, where the SMA and its associated connections were eliminated.

Fig. 3 Modified anatomic model

The mean and standard deviation of the path weights at combination level 5 (maximum number of combinations) for all subjects are shown in Table 1.

Table 1 Path coefficients [mean (std)] for all subjects at combination level 5 (126 combinations)

        LM1->RM1          LPMA->LM1         LPMA->RPMA        RM1->LM1          RPMA->LPMA        RPMA->RM1
SUBJ 1  2.0122 (0.0983)  -0.8269 (0.1648)   1.0377 (0.0692)   1.9006 (0.0728)   0.4559 (0.0381)  -0.3259 (0.0729)
SUBJ 2  0.5721 (0.0416)   0.6693 (0.0370)   0.6349 (0.0615)   0.3045 (0.0624)   0.7926 (0.0491)   0.2380 (0.0316)
SUBJ 3  0.2257 (0.0991)   0.1825 (0.0848)   0.7485 (0.0322)   0.6468 (0.0840)   0.6922 (0.0387)   0.6074 (0.0827)
SUBJ 4  0.7505 (0.4170)   0.2006 (0.2329)   1.0814 (0.3297)   0.5175 (0.4360)   1.1579 (0.3361)   0.2397 (0.2157)
SUBJ 5  0.7593 (0.0604)   0.5105 (0.0393)   0.7677 (0.1613)   0.2932 (0.0405)   0.5219 (0.1950)   0.3838 (0.0421)
SUBJ 6  0.8826 (0.0469)   0.3181 (0.0368)   0.8010 (0.0320)   0.2668 (0.0206)   0.5749 (0.0325)   0.2190 (0.0572)
SUBJ 7  1.1202 (0.0548)   0.7717 (0.0439)   1.2571 (0.0408)  -0.1822 (0.0349)   0.3285 (0.0292)   0.3339 (0.0276)

To assess the consistency of the path coefficients, and to ensure good coefficient-of-variation measures across all permutations, we use two metrics representing these values; they are tabulated in Table 2. θ represents the slope of the line joining the mean path coefficients corresponding to each combination level; a lower slope shows that the path coefficient mean remained the same across all permutation levels. The γ value is related to the coefficient of variation and is calculated as

γ = Σσ² / Σμ²    (3)

Connections were judged as reliable according to the θ and γ values as

A = (γ < 0.05) + 2·(|θ| < 0.01025)    (4)

The reliability metric, A, was chosen so that the change in path coefficients across all combinations is less than 0.1 (stable across combinations), and the sum-of-squares variance is within 5% of the sum-of-squares mean (stable within combination).

While the initial causal model for the resting state motor network included SMA, the path coefficients from SEM analysis and permutation tests showed that the model was a poor fit for our data. Thus, the SMA and its associated connections were removed from the model. In terms of the resting state network, we hypothesize that this might be due to the function of the SMA, which is complex programming and planning of motor functions, and thus does not exert a strong causal influence on the other ROIs in the network, although it is highly correlated (Figure 1). The weak influence of SMA on the model might also be due to other exogenous variables, which were not considered in our model. For example, the basal ganglia, which has significant anatomical and functional connections with the SMA might be acting upon it, and in turn resulting the poor path coefficients [9]. From our results with the modified causal model, LPMA>RPMA connection is the strongest across all the subjects, followed by the RPMA->LPMA connection, based on the

IFMBE Proceedings Vol. 32

484

T. Kavallappa et al.

Table 2 Reliability indices of path weights for all subjects. SUB J 1

2

3

4

5

6

7

LM1->RM1

θ = 0.2028

γ

= 0.3238 Mean_PC = 0.8970 θ = 0.0168 γ = 0.0107 Mean_PC = 0.6016 θ = 0.0318 γ = 0.1179 Mean_PC = 0.3294 θ = 0. 0153 γ = 0.1208 Mean_PC = 0.6903 θ = 0.0011 γ = 0.0129 Mean_PC = 0.7564 θ = 0.0030 γ = 0.0181 Mean_PC = 0.8810 θ = 0.0386 γ = 0.0079 Mean_PC = 0.9721

LPMA->LM1

θ

= 0.1758 γ = 0.3404 Mean_PC = -0.1001 θ = 0.0367 γ = 0.0130 Mean_PC = 0.5712 θ = 0.0206 γ = 0.2626 Mean_PC = 0.2507 θ = 0.0019 γ = 0.3868 Mean_PC = 0.2354 θ = 0.0282 γ = 0.0390 Mean_PC = 0.4228 θ = 0.0125 γ = 0.0938 Mean_PC = 0.2913 θ = 0.0756 γ = 0.0162 Mean_PC = 0.5534

LPMA->RPMA

θ

= 0.0117 γ = 0.0101 Mean_PC = 1.0643 θ = 0.0535 γ = 0.0315 Mean_PC = 0.7710 θ = 0.0164 γ = 0.0050 Mean_PC = 0.7886 θ = 0.0405 γ = 0.0588 Mean_PC = 0.8071 θ = 0.0283 γ = 0.1335 Mean_PC = 0.8753 θ = 0.0541 γ = 0.0084 Mean_PC = 0.9427 θ = 0.0276 γ = 0.0046 Mean_PC = 1.1604

path coefficients. We see that the path coefficients corresponding to RPMA->RM1, and LM1->RM1 are the most reliable across subjects, while path coefficients corresponding to LPMA->LM1, RM1->LM1 have the highest variability between subjects. Observing the between subject reliability of the path weights, subjects 1 and 4 display the least reliable connections, while subjects 5, 6 and 7 seem to have the more reliable path weights. From our results, it does not appear that SEM of the investigated cortical motor network is totally reliable, thus caution should be taken when interpreting similar analyses. However, while the magnitude of the mean path weights vary across subjects, some of the connections (LPMA>RPMA, RPMA->LPMA, LM1->RM1) appear relatively close in magnitude. This variability in mean path weights may not be entirely unexpected, since our study population is somewhat heterogeneous in age, gender, and experience (i.e., motor training). Furthermore, reliable output of SEM is highly dependent on having a correct a priori anatomical model. While we have used a subset of a well-characterized anatomical network, absolute validation of our model is difficult, particularly when recent studies have shown that functional connections exist even when anatomical connections cannot be identified [10] and that different anatomical regions are active in resting versus active networks [11]. In conclusion, we have shown that some connections within the cortical motor network are consistent, but were not reproducible across the entire population, and were highly dependent on the chosen model. Therefore, care should not only be taken when interpreting the significance of path weights between regions, selection of a functionally correct network model is critical to reliable findings in SEM. Future studies will explore more extensive models of

A = 3, A=2, A=1

RM1->LM1

θ

= 0.2513 γ = 0.1016 Mean_PC = 0.8741 θ = 0.0423 γ = 0.0295 Mean_PC = 0.4233 θ = 0.0184 γ = 0.0279 Mean_PC = 0.5796 θ = 0.0040 γ = 0.3776 Mean_PC = 0.4303 θ = 0.0276 γ = 0.0206 Mean_PC = 0.3712 θ = 0.0215 γ = 0.0252 Mean_PC = 0.3245 θ = 0.0844 γ = 0.1189 Mean_PC = 0.0539

RPMA->LPMA

θ

= 0.0241 γ = 0.0183 Mean_PC = 0.5188 θ = 0.0202 γ = 0.0102 Mean_PC = 0.7308 θ = 0.0161 γ = 0.0070 Mean_PC = 0.7304 θ = 0.0421 γ = 0.0545 Mean_PC = 0.8790 θ = 0.0498 γ = 0.1250 Mean_PC = 0.6743 θ = 0.0015 γ = 0.0063 Mean_PC = 0.5716 θ = 0.0447 γ = 0.0101 Mean_PC = 0.4469

RPMA->RM1

θ

= 0.0918 = 0.9566 Mean_PC = 0.1781 θ = 0.0048 γ = 0.0714 Mean_PC = 0.2352 θ = 0.0081 γ = 0.0389 Mean_PC = 0.5729 θ = 0.0200 γ = 0.2033 Mean_PC = 0.3049 θ = 0.0061 γ = 0.0601 Mean_PC = 0.3652 θ = 0.0206 γ = 0.1282 Mean_PC = 0.2832 θ = 0.0011 γ = 0.0164 Mean_PC = 0.3458

γ

motor connectivity to determine the most relevant network to guide clinical implementation of these techniques.
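Equations (3) and (4) translate directly into code. Given the mean and standard deviation of one connection's path coefficient at each combination level, the sketch below computes θ (here taken as a least-squares slope over levels, our reading of the definition), γ, and the combined reliability index A:

```python
import numpy as np

def reliability(level_means, level_stds):
    """Reliability of one connection's path coefficient.

    level_means / level_stds: per-combination-level mean and std of
    the path coefficient across all permutations at that level.
    Returns theta (slope), gamma (Eq. 3), and the index A (Eq. 4).
    """
    levels = np.arange(1, len(level_means) + 1)
    theta = np.polyfit(levels, level_means, 1)[0]  # slope over levels
    gamma = np.sum(np.square(level_stds)) / np.sum(np.square(level_means))
    a = int(gamma < 0.05) + 2 * int(abs(theta) < 0.01025)
    return theta, gamma, a
```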

REFERENCES

1. Ogawa S, Tank DW, Menon R, Ellermann JM, Kim SG, Merkle H, Ugurbil K (1992) Intrinsic signal changes accompanying sensory stimulation: functional brain mapping with magnetic resonance imaging. Proc Natl Acad Sci U S A 89(13):5951-5955
2. Rogers BP, Morgan VL, Newton AT, Gore JC (2007) Assessing functional connectivity in the human brain by fMRI. Magn Reson Imaging 25(10):1347-1357
3. McIntosh AR, Grady CL, Ungerleider LG, Haxby JV, Rapoport SI, Horwitz B (1994) Network analysis of cortical visual pathways mapped with PET. J Neurosci 14:655-666
4. McIntosh AR, Gonzalez-Lima F (1994) Structural equation modeling and its application to network analysis in functional brain imaging. Human Brain Mapping 2(1-2):2-22
5. Lindquist M (2008) The statistical analysis of fMRI data. Statistical Science 23(4):439-464
6. Cox RW (1996) AFNI: software for analysis and visualization of functional magnetic resonance neuroimages. Computers and Biomedical Research 29:162-173
7. Sharma N, Baron JC, Rowe JB (2009) Motor imagery after stroke: relating outcome to motor network connectivity. Ann Neurol 66(5):604-616
8. Chen G, Glen DR, Stein JL, Meyer-Lindenberg AS, Saad ZS, Cox RW (2007) Model validation and automated search in FMRI path analysis: a fast open-source tool for structural equation modeling. Human Brain Mapping Conference
9. Kandel E, Schwartz J (1985) Principles of Neural Science, 2nd edition. Elsevier Science Publishing Company
10. Honey CJ, Sporns O, Cammoun L, Gigandet X, Thiran JP, Meuli R, Hagmann P (2009) Predicting human resting-state functional connectivity from structural connectivity. Proc Natl Acad Sci U S A 106(6):2035-2040
11. Newton AT, Morgan VL, Gore JC (2007) Task demand modulation of steady-state functional connectivity to primary motor cortex. Hum Brain Mapp 28:663-672


Quantitative Characterization of Radiofrequency Ablation Lesions in Tissue Using Optical Coherence Tomography

J. Wierwille1, A. McMillan3, R. Gullapalli3, J. Desai2, and Y. Chen1

1 Department of Bioengineering, University of Maryland, College Park, USA
2 Department of Mechanical Engineering, University of Maryland, College Park, USA
3 Department of Radiology, University of Maryland, Baltimore, USA

Abstract— Radiofrequency (RF) ablation is a widely used therapeutic intervention in the management of many cancers, including breast and liver cancers. While optimum delivery of RF energy can be monitored through real-time temperature mapping from MR imaging, it is possible that microscopic foci of malignant tissue may be left untreated. Such microscopic tissue may be beyond the resolution limit of MRI and may result in sub-optimal treatment efficacy, which may lead to cancer recurrence. Thus, for optimal treatment it is beneficial to incorporate higher-resolution techniques such as optical coherence tomography (OCT), which can surpass the resolution afforded by MRI into the micron range in situ and can provide histopathology-level information on tissue in vivo. In this preliminary study, in order to test the feasibility of this approach, we characterized tissue properties such as the scattering coefficient (μs) in non-ablated and ablated bovine skeletal muscle ex vivo. The estimated μs of the non-ablated muscle region was 1.9641 mm⁻¹ while the μs of the ablated region was 5.8998 mm⁻¹ (p

for t > t*.    (2)

System (1) describes the pre-treatment phase, while system (2) follows the dynamics after the treatment starts. The difference between the two systems is the introduction of H, the drug-induced death rate. In both systems, L, D, and u denote the birth, death, and mutation rates, respectively. We assume that 0 ≤ D < L and 0 < u ≪ 1. The initial conditions for the pre-treatment system (1) are given as constants N(0) = N₀ ≠ 0 and R(0) = 0. The initial conditions for system (2) are N(t*) and R(t*), which are the solutions of (1) at t = t*. In this model we assume that both the wild-type and the resistant (mutated) cells have the same birth and death rates, as assumed in Komarova [9]. The time of the beginning of the treatment, t*, is related to the size of the tumor at that time. If we assume that the total number of cancer cells at time t* is M, we can use the exponential growth of the cancer and the fact that the mutation rate u is relatively small to estimate t* as

t* ≈ (1/(L − D)) ln(M/N₀).    (3)

III. ANALYSIS AND RESULTS

Substituting (3) into the solution of system (1) at time t* gives

R(t*) = N0 u t* e^((L−D)t*) ≈ Mu ln(M / N0) / (L(1 − D/L)).   (4)

Here M is the total number of cancer cells when the therapy begins. The expression for R(t*) contains the turnover ratio D/L. Therefore, the amount of resistant mutants generated before the beginning of the treatment clearly depends on the turnover rate. The slower the growth of the cancer (i.e., the closer the turnover rate D/L is to 1), the larger the amount of pre-treatment drug resistance. Conversely, the faster the tumor grows (i.e., the closer the turnover rate is to zero), the smaller the resistance that develops prior to the beginning of the treatment. The result is natural, since a tumor with a lower death rate will reach detection size with fewer divisions (and therefore fewer mutations) than a tumor with a higher death rate. Now, assume that mutations could be terminated after time t*, the time at which the therapy starts, so that the only drug resistance present after t* would be the "progeny" of the resistance generated before therapy started. We refer to such resistance as the "pre-treatment resistance at time t", where t is the time from the start of the treatment, and denote it by Rp(t). Note that Rp(t) is simply the solution of system (1) at time t*, multiplied by an exponential term e^((L−D)t) that accounts for the growth of this resistance during treatment, that is

Rp(t) = [Mu ln(M / N0) / (L(1 − D/L))] e^((L−D)t).   (5)

Equation (5) clearly shows that the amount of resistance generated before the beginning of the treatment, and still present (including its progeny) at any given time afterward, depends on the turnover rate. Using the same methods, such dependence can be shown to be present also in the case of a multi-drug therapy. We note the simplicity of our mathematical approach compared with the much more sophisticated one taken by Komarova. Of course, our result concerns only the average behavior of the drug-resistant population, given our deterministic approach.
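To make the role of the turnover ratio concrete, the following minimal sketch (Python; all parameter values are illustrative, not taken from the paper) evaluates Eqs. (3)-(5) for a fixed birth rate L and increasing death rate D:

```python
import numpy as np

def pretreatment_resistance(t, L, D, u, N0, M):
    """Evaluate Eqs. (3)-(5): time of treatment start t*, resistance at t*,
    and pre-treatment resistance R_p(t) at time t after therapy starts."""
    t_star = np.log(M / N0) / (L - D)                    # Eq. (3)
    R_star = M * u * np.log(M / N0) / (L * (1 - D / L))  # Eq. (4)
    R_p = R_star * np.exp((L - D) * t)                   # Eq. (5)
    return t_star, R_star, R_p

N0, M, u = 1.0, 1e9, 1e-7
for D in (0.0, 0.5, 0.9):        # fixed birth rate L = 1, rising turnover
    t_star, R_star, _ = pretreatment_resistance(0.0, 1.0, D, u, N0, M)
    print(f"D/L = {D:.1f}: t* = {t_star:6.1f}, R(t*) = {R_star:.3e}")
```

The printed pre-treatment resistance grows as D/L approaches 1, in line with the discussion above.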



IV. DISCUSSION


A puzzling issue is the source of the apparent contradiction between our result and the result of Komarova [9]. A possible cause could be found in the different mathematical techniques used: while in this work we use a deterministic approach that deals with numbers of cells, in [9] the quantities of interest are probabilities. Can this be the source of the contradictory results? Clearly, the answer must be negative. The real reason for the difference is that Komarova studies the probability of having such resistance in the limit, as t → ∞. It is actually only at t = ∞ that the results of [9] show a lack of dependence of the resistance on the turnover rate (see page 365, equation (49), and the following discussion in [9]). Therefore these results do not hold at any finite time. This point can be further understood by the following argument. Using techniques of branching processes, we were able to calculate the probability of having resistant mutants generated before the beginning of the treatment and present, including their progeny, at some given time afterward. This probability is given by the following formula:

V. CONCLUSIONS

Our goal was to understand the reasons behind the difference in the results of Komarova [9] for the single and multi-drug cases. In order to accomplish this goal we have used a different, much simpler approach, based on an elementary compartmental system of linear ordinary differential equations, rather than on stochastic processes. In particular, we wanted to understand whether it is true that in the case of a single drug treatment, drug resistance (and therefore treatment success) is independent of the cancer's turnover rate. We have shown that for the single drug case, Komarova's results do not hold at any finite time. This is due to the fact that all quantities of interest are defined only in the limit t → ∞ in [9]. The dependence on the turnover rate in the single drug case is simply weaker than the dependence in the multi-drug case. The asymptotic analysis in [9] loses this information.

PR(t) = 1 − exp{ −uM [L / (D e^(−(L−D)t))] ln[1 / (1 − D e^(−(L−D)t) / L)] }.   (6)

Here the time t is measured from the start of the treatment. Once again, it is clear that the probability given by (6) does depend on the cancer turnover rate at any finite time t. It is only asymptotically that such dependence disappears. The strength of the dependence depends on the actual values of the parameters. Furthermore, the conclusion in [9] that, in the single drug case, the probability of treatment success does not depend on the turnover rate (see page 352 of [9]) is tied to the definition of a successful treatment as complete extinction of the tumor as time becomes infinite. Different definitions of a successful treatment (such as requiring tumors not to exceed a certain size, or simply considering finite times) will lead to a dependence on the turnover rate also in the single drug case. While from a mathematical point of view it is common practice to compute asymptotics as t → ∞, in our opinion it is more desirable in the problem of drug resistance (and the related notion of treatment success) to study the dynamics over finite times, at most of the order of several years.
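The finite-time dependence and its asymptotic disappearance can be checked numerically. A minimal sketch follows (illustrative parameters, with a smaller uM than the deterministic example above so the differences remain visible):

```python
import numpy as np

def p_resistance(t, L, D, u, M):
    """Eq. (6): probability that resistance generated pre-treatment is
    present, with its progeny, t time units after therapy starts."""
    x = (D / L) * np.exp(-(L - D) * t)   # decays to 0 as t -> infinity
    return 1.0 - np.exp(-u * M * np.log(1.0 / (1.0 - x)) / x)

u, M = 1e-9, 1e7
for t in (0.0, 1.0, 5.0, 50.0):
    probs = [p_resistance(t, 1.0, D, u, M) for D in (0.1, 0.5, 0.9)]
    print(f"t = {t:4.1f}:", "  ".join(f"{p:.5f}" for p in probs))
print("t -> inf limit:", 1.0 - np.exp(-u * M))   # turnover-independent
```

The rows differ across turnover ratios at finite t, while every column converges to 1 − exp(−uM), which does not involve D/L.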

ACKNOWLEDGMENT

The authors wish to thank Prof. Dmitry Dolgopyat for his helpful discussions and suggestions. Cristian Tomasetti would like to thank Professor Doron Levy for his advice and financial support. This work was supported in part by the joint NSF/NIGMS program under Grant Number DMS-0758374, and by the National Cancer Institute under Grant Number R01CA130817.

REFERENCES

1. Teicher B A (2006) Cancer drug resistance. Humana Press, Totowa, New Jersey
2. Luria S E, Delbrück M (1943) Mutation of bacteria from virus sensitivity to virus resistance. Genetics 28:491–511
3. Goldie J H, Coldman A J (1979) A mathematical model for relating the drug sensitivity of tumors to their spontaneous mutation rate. Cancer Treat. Rep. 63:1727–1733
4. Goldie J H, Coldman A J, Gudaskas G A (1982) Rationale for the use of alternating non-cross resistant chemotherapy. Cancer Treat. Rep. 66:439–449
5. Goldie J H, Coldman A J (1983) A model for resistance of tumor cells to cancer chemotherapeutic agents. Math. Biosci. 65:291–307
6. Goldie J H, Coldman A J (1998) Drug Resistance in Cancer: Mechanisms and Models. Cambridge University Press, Cambridge
7. Iwasa Y, Nowak M A, Michor F (2006) Evolution of resistance during clonal expansion. Genetics 172:2557–2566
8. Komarova N, Wodarz D (2005) Drug resistance in cancer: principles of emergence and prevention. Proc. Natl. Acad. Sci. USA 102:9714–9719
9. Komarova N (2006) Stochastic modeling of drug resistance in cancer. J. Theor. Biol. 239:351–366


Author: Cristian Tomasetti
Institute: Mathematics Department, University of Maryland
Street: Paint Branch Drive
City: College Park, MD 20742-3289
Country: USA
Email: [emailprotected]


Design and Ex Vivo Evaluation of a 3D High Intensity Focused Ultrasound System for Tumor Treatment with Tissue Ablation

K. Lweesy, L. Fraiwan, M. Al-Shalabi, L. Mohammad, and R. Al-Oglah
Jordan University of Science and Technology, Faculty of Engineering, Biomedical Engineering Department, Irbid 22110, Jordan

Abstract— This paper describes the design, construction, and evaluation of a three-dimensional (3D) ultrasound system for treating different kinds of tumors using high intensity focused ultrasound (HIFU). The system consists of two major parts: an ultrasonic therapy part and a treatment planning part. The ultrasonic therapy part consists of a bowl-shaped ultrasound transducer (made from lead zirconate titanate (PZT), with a resonance frequency of 0.5 MHz), a lossless electrical impedance matching circuit built to ensure maximum electrical power delivery to the transducer, a function generator, and a high power amplifier. The ultrasonic therapy part is responsible for generating a high-power focus at the location of the geometric focus of the bowl-shaped ultrasound transducer. The treatment planning part consists of three stepper motors (responsible for moving the setup in the x-, y-, and z-directions), three high-voltage high-current Darlington arrays (to supply the stepper motors with the required voltages and currents), and C# software to perform the treatment planning. To assess the movement of the treatment planner, each of the three stepper motors was moved forward and backward from end to end. The treatment planner was then successfully driven to cover cubes of dimensions 1 x 1 x 1 cm3, 2 x 2 x 2 cm3, 4 x 4 x 4 cm3, and 8 x 8 x 8 cm3, with step sizes of 0.5, 1, 2, and 4 mm, respectively. Ex vivo experiments using fresh bovine liver were performed and demonstrated the capability of the system to generate lesions both on- and off-axis. Lesions at different depths were successfully generated at the intended locations. Temperature distributions were recorded both inside and outside the lesion and indicated that the temperature reached about 60°C inside the lesion and remained below 39°C outside it.

Keywords— Geometrically focused transducer, high intensity focused ultrasound, lesion, sonication, treatment planning.

I. INTRODUCTION

Cancer is a disease that can affect people of all ages, although the risk of having cancer increases with age. Cancer is responsible for more than 13% of all human deaths. According to the American Cancer Society, in the year 2007 about 7.6 million people died as a result of cancer worldwide [1]. Different techniques for treating cancer exist, such as surgery [2], chemotherapy [3], radiotherapy [3], microwave therapy [4], and high intensity focused ultrasound (HIFU) therapy [5]. Surgery, chemotherapy, radiotherapy, and microwave therapy suffer from many drawbacks. As a result, HIFU represents a good choice that can non-invasively target different kinds of tumors. Over the past two decades, HIFU has been receiving growing attention from research groups and companies as a noninvasive procedure to treat cancers in different organs, such as the kidney, liver, brain, prostate, and breast. Many HIFU devices have been tested under the guidance of magnetic resonance imaging (MRI). These HIFU devices either were unable to cover the whole cancerous volume, due to limitations on the steering angle and the maximum depth of penetration (DOP), or relied on manual movement of single-element ultrasound transducers, which was inaccurate. The purpose of this study was to build a complete and accurate ultrasound system for the treatment of different tumors without any manual movement of the ultrasound transducer.

II. MATERIALS AND METHODS

The overall system proposed herein is shown as a block diagram in Figure 1. The system consists of two parts, ultrasonic therapy and treatment planning. The ultrasonic therapy part consists of a single-element geometrically focused ultrasound transducer that is driven by a function generator and a power amplifier, and connects to a personal computer (PC). The treatment planning part includes three stepper motors, three Darlington arrays that connect to the PC through its parallel port, and C# software to perform the planning.

A. Ultrasonic Therapy Part

a) Ultrasound Transducer Simulations

The pressure and intensity beam profiles of a single-element geometrically focused ultrasound transducer were simulated using Huygens' principle [6], which evaluates the overall generated pressure (P(r, θ)) or intensity (I(r, θ)) at a certain point in the medium by dividing the ultrasound transducer into small point sources (known as simple


sources), then adding the contributions of these sources to calculate the overall pressure or intensity. Matlab (MathWorks, Inc., USA) simulations were used to calculate both the pressure and intensity distributions. Figure 2(a) shows the normalized intensity distribution calculated in an x-z plane (y = 0); a focal point at (x, y, z) = (0, 0, 10) cm is observed. Using the simulated intensity field, the temperature distribution was calculated using the Pennes bioheat transfer equation (BHTE) [7]:

ρCt ∂T/∂t = K(∂²T/∂x² + ∂²T/∂y² + ∂²T/∂z²) − wCb(T − Ta) + q(x, y, z)

where ρ is the tissue density, Ct is the specific heat of the tissue (3770 J·kg-1·ºC-1), K is the thermal conductivity (0.5 W·m-1·ºC-1), T is the temperature at time t at the point (x, y, z) in ºC, Ta is the arterial blood temperature (37ºC), w is the perfusion in the tissue (5 kg·m-3·s-1), Cb is the specific heat of the blood (3770 J·kg-1·ºC-1), and q(x, y, z) is the power deposited at the point (x, y, z). The power was calculated from the intensity field distribution of the ultrasound transducer, while the BHTE was solved using a numerical finite difference method with the boundary condition temperatures set at 37ºC. Figure 2(b) shows the temperature distribution generated by the intensity field shown in Figure 2(a).

Fig. 1 Overall system block diagram
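For illustration only, here is a minimal explicit finite-difference (FTCS) sketch of the BHTE above, with the boundary held at 37 ºC. The grid, time step, and the Gaussian stand-in for the deposited power q(x, y, z) are assumptions for the sketch, not the authors' implementation:

```python
import numpy as np

# Tissue and blood constants from the text; q is a synthetic focal spot.
rho, Ct, K = 1000.0, 3770.0, 0.5       # kg/m^3, J/(kg C), W/(m C)
w, Cb, Ta = 5.0, 3770.0, 37.0          # kg/(m^3 s), J/(kg C), deg C

n, dx = 41, 1e-3                       # 41^3 nodes, 1 mm spacing
dt = 0.2 * rho * Ct * dx**2 / (6 * K)  # well below the explicit stability limit
ax = (np.arange(n) - n // 2) * dx
X, Y, Z = np.meshgrid(ax, ax, ax, indexing="ij")
q = 6e7 * np.exp(-(X**2 + Y**2 + Z**2) / (2 * (2e-3) ** 2))  # W/m^3

T = np.full((n, n, n), 37.0)           # start at body temperature
for _ in range(int(2.0 / dt)):         # 2 s sonication
    core = T[1:-1, 1:-1, 1:-1]
    lap = (-6 * core
           + T[2:, 1:-1, 1:-1] + T[:-2, 1:-1, 1:-1]
           + T[1:-1, 2:, 1:-1] + T[1:-1, :-2, 1:-1]
           + T[1:-1, 1:-1, 2:] + T[1:-1, 1:-1, :-2]) / dx**2
    dT = (K * lap - w * Cb * (core - Ta) + q[1:-1, 1:-1, 1:-1]) / (rho * Ct)
    T[1:-1, 1:-1, 1:-1] = core + dt * dT   # boundary nodes stay at 37 C
print("peak temperature after 2 s: %.1f C" % T.max())
```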


The temperature at the focal point was found to reach around 60°C, while outside the focus it remained below 40°C (safe).

b) Ultrasound Transducer Construction

Several parameters govern the selection of the ultrasound transducer to be used for HIFU, such as material type, geometry, and resonance frequency. The choice of the transducer's material is crucial, since it has a direct impact on both the electrical and acoustical properties of the transducer. Among the different PZT materials available on the market, PZT 8 and PZT 4 are the best candidates for handling the high driving electrical powers needed for HIFU. PZT 8 has a low loss factor and a high quality factor compared to PZT 4; as a result, PZT 8 was chosen as the transducer material. Based on the simulation results mentioned earlier, a geometrically focused ultrasound transducer with a resonance frequency of 0.5 MHz was chosen in order to allow deep penetration of the ultrasound wave into tissue, since the DOP is inversely proportional to the resonance frequency. The geometric focus of the transducer was chosen to be 10 cm to allow the treatment of deep cancerous tissue. The electrical impedance of the PZT-8 material alone was measured to be 1.3 kΩ ∠ 25°. This high impedance requires using a low-capacitance coaxial cable in order to have both the cable electrical impedance and the PZT-8 electrical impedance in the same range. A two-meter coaxial cable with a characteristic impedance of 75 Ω was found to be suitable. The soldering between the coaxial cable and the geometrically focused ultrasound transducer used a low-temperature soldering material (Indalloy #1E, Indium Corporation of America, USA) to ensure that the temperature during soldering did not exceed the Curie temperature of the PZT-8 material, which is about 310°C.

c) Ultrasound Driving Source

A sinusoidal signal (0.5 MHz) generated from a function generator was used. The sinusoidal signal was then fed into a 25 W power amplifier (Model 25A250, Amplifier Research, USA) to produce the high power required for HIFU treatments. Usually, at normal blood perfusion rates, a power of 6 W is enough to raise the temperature at the focal point to 60ºC if the sonication time is set to 2 seconds.

d) Electrical Matching Circuit

The electrical impedance of the transducer, along with the coaxial cable connected to it, was measured to be 46.32 + j13.07 Ω. Since this value differs from the optimal value of 50 + j0 Ω, which is required for maximum power delivery to the load, an LC (L = inductor and C = capacitor) matching circuit with L = 0.21 µH and C = 1.88 nF was designed and built.
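As a sanity check on the quoted component values, the short sketch below computes the input impedance of one plausible network (series inductor into the load, shunt capacitor across the source side; the topology itself is our assumption) at 0.5 MHz:

```python
import numpy as np

f = 0.5e6
w_ = 2 * np.pi * f
Z_load = 46.32 + 13.07j            # transducer plus coaxial cable, measured
L_m, C_m = 0.21e-6, 1.88e-9        # matching inductor and capacitor from the text

Z_series = Z_load + 1j * w_ * L_m              # inductor in series with the load
Z_in = 1.0 / (1.0 / Z_series + 1j * w_ * C_m)  # capacitor shunting the source side
print("Z_in = %.2f %+.2fj ohms" % (Z_in.real, Z_in.imag))  # within ~1% of 50 + j0
```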


Fig. 2 Simulated normalized intensity distribution (a) and temperature distribution (b) for a geometric focus at (0, 0, 10) cm

B. Treatment Planning Part

The treatment planning part consists mainly of a three-dimensional (3D) translating system comprising three stepper motors, named X, Y, and Z, and three Darlington arrays that connect to the PC through the parallel port, which is divided into three sub-ports: data, control, and status. Two six-wire stepper motors (X and Y) and one eight-wire stepper motor (Z) were used to move the ultrasound transducer. Since the Z stepper motor is responsible for moving the whole setup, it was chosen to be larger in order to generate the required torque. All three stepper motors rotate with a step angle of 1.8º; thus one revolution (about 1 mm of horizontal travel) requires 360/1.8 = 200 steps. The distance resolution (the minimum horizontal distance any of the three stepper motors can move) is therefore 1 mm/200 = 5 µm; this arithmetic is illustrated in the sketch below. Three high-voltage high-current Darlington arrays (ULN2003A, Allegro MicroSystems, Inc., USA) were used because of their ability to supply the stepper motors with high voltages (up to 50 V) and high currents (up to 500 mA). Each Darlington array was driven with a 5 V transistor-transistor-logic (TTL) signal. The three Darlington arrays were connected to the X, Y, and Z motors on one side and to the PC through its parallel port interface on the other. Figure 3 shows a front view of the built translating system. C# code was written to move any of the three stepper motors either forward or backward. After each movement, a command instructs the moved stepper motor to stop for a pre-determined period of time, which represents the time delay required to cool down the tissue that lies in front of the transducer after each sonication.

Fig. 3 Front view of the translating system (stepper motors X, Y, and Z)

C. Ex Vivo Experiments

To verify the capability of the system to generate on- and off-axis lesions ex vivo, a fresh bovine liver (thickness about 4 cm) was obtained and submerged in a 40 x 40 x 60 cm3 water tank. The liver was placed such that its proximal surface was 6 cm from the transducer and its distal surface was about 10 cm from the transducer. The coordinate (0, 0, 0) was set at the center of the ultrasound transducer. The transducer was aimed at a point lying exactly on the distal surface of the liver (to produce a visible lesion), then turned on for 2 seconds. The transducer was then moved off-axis to the locations (1, 1, 10) cm and (-1, -1, 10) cm, and was turned on at each location for 2 seconds.
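The planner's step arithmetic can be made concrete with a short sketch (the helper names are hypothetical, not from the authors' C# planner; the numbers come from the text above):

```python
# 1.8 deg/step -> 200 steps/revolution; ~1 mm travel/rev -> 5 um resolution.
STEPS_PER_REV = 360 / 1.8
MM_PER_STEP = 1.0 / STEPS_PER_REV

def steps_for(distance_mm):
    """Full motor steps needed to travel a given distance on one axis."""
    return round(distance_mm / MM_PER_STEP)

def raster(side_mm, step_mm):
    """Grid of (x, y, z) sonication points covering a cube of given side."""
    pts = [i * step_mm for i in range(int(side_mm / step_mm) + 1)]
    return [(x, y, z) for x in pts for y in pts for z in pts]

print(steps_for(0.5))            # 100 motor steps per 0.5 mm move
print(len(raster(10.0, 0.5)))    # 1 cm cube at 0.5 mm step: 21^3 = 9261 points
```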

III. RESULTS

The electroacoustic efficiency, which is defined as the output acoustic power divided by the input electric power, was first measured to ensure that the therapeutic ultrasound transducer is capable of delivering enough power to the


tissue. The radiation force technique was used to measure the electroacoustic efficiency, which was found to be 52%. This efficiency can be increased by adding a matching layer to the design. The movement of the 3D translating system was tested first by moving each of the three stepper motors forward and backward from end to end. Then cubes of dimensions 1 x 1 x 1 cm3, 2 x 2 x 2 cm3, 4 x 4 x 4 cm3, and 8 x 8 x 8 cm3 were scanned using step sizes of 0.5, 1, 2, and 4 mm, respectively. For a cube of dimensions 1 x 1 x 1 cm3 (i.e., xc = yc = zc = 1 cm), the ultrasound transducer was moved with a step size of 0.5 mm to cover the whole volume. After each step movement of the ultrasound transducer, the fixed hydrophone recorded the voltage, and thus the intensity, generated by the ultrasound transducer. Ex vivo experiments were performed to prove the capability of the overall system to generate lesions both on- and off-axis. Three sonications were aimed at (0, 0, 10) cm, (1, 1, 10) cm, and (-1, -1, 10) cm, with the on time of each sonication set to 2 seconds and the time between two consecutive sonications (off time) set to 10 seconds. The result is shown in Figure 4, which indicates the generation of three different lesions. The two off-axis lesions (at (1, 1, 10) cm and (-1, -1, 10) cm) coincided exactly with the intended locations, while the on-axis lesion ((0, 0, 10) cm) was shifted slightly from its intended location, which might be due to the curvature of the distal surface of the liver.
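For reference, the radiation force technique infers acoustic power from the force exerted on an absorbing target, F = P/c, so P = F·c. The sketch below shows the arithmetic; the balance reading is an illustrative value chosen to reproduce the reported 52% efficiency, not a measurement from the paper:

```python
g = 9.81           # m/s^2
c = 1480.0         # speed of sound in water, m/s
delta_m = 0.9e-3   # apparent mass change on the balance, kg (illustrative)

P_acoustic = delta_m * g * c    # ~13.1 W of acoustic power
P_electric = 25.0               # electrical drive power, W
print("electroacoustic efficiency = %.0f%%" % (100 * P_acoustic / P_electric))
```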

IV. CONCLUSIONS

HIFU is gaining attention as a noninvasive/minimally invasive approach to treat cancer in different organs such as the liver, kidney, brain, prostate, and breast. Most previously proposed HIFU devices for noninvasive treatment of breast cancer either used complex and expensive arrays, still with limited steering angles and DOPs, or used single-element transducers that had to be moved manually (and therefore inaccurately) to generate different lesions. Although the design described herein uses a single-element transducer, it has a 3D translating system that can move the focal point accurately and repeatably with a variable step size as small as 5 µm. Mechanical movements of the 3D translating system and ex vivo experiments were used to prove the capability of the system to generate lesions both on- and off-axis. In conclusion, a 3D HIFU therapeutic system that can be used to treat breast cancer, as well as other tumors, has been designed, built, and tested. The device has been shown to provide


good movement and focusing capabilities, two important parameters that must be considered when designing a HIFU device. Further improvement to the system can be achieved by integrating it with MRI guidance.

Fig. 4 Three generated lesions, one on-axis and two off-axis

REFERENCES

[1] American Cancer Society, Cancer Statistics, 2007.
[2] Early Breast Cancer Trialists' Collaborative Group (EBCTCG), "Effects of radiotherapy and of differences in the extent of surgery for early breast cancer on local recurrence and 15-year survival: an overview of the randomised trials," Lancet, 366, 2087–2106, 2005.
[3] M. Overgaard, P. Hansen, J. Overgaard, C. Rose, M. Andersson, F. Bach, M. Kjaer, C. Gadeberg, H. Mouridsen, M. Jensen, K. Zedeler, "Postoperative radiotherapy in high-risk premenopausal women with breast cancer who receive adjuvant chemotherapy. Danish Breast Cancer Cooperative Group 82b Trial," New Engl J Med, 337(14):949-955, 1997.
[4] G. Vlastos and H. Verkooijen, "Minimally Invasive Approaches for Diagnosis and Treatment of Early-Stage Breast Cancer," The Oncologist, 12:1–10, 2007.
[5] P. Huber, J. Jenne, R. Rastert, I. Simiantonakis, H. Sinn, H. Strittmatter, D. Fournier, M. Wannenmacher, J. Debus, "A New Noninvasive Approach in Breast Cancer Therapy Using Magnetic Resonance Imaging-guided Focused Ultrasound Surgery," Cancer Res, 61:8441-8447, 2001.
[6] J. Zemanek, "Beam behavior within the nearfield of a vibrating piston," J Acoust Soc Am, 49:181–191, 1971.
[7] H. Pennes, "Analysis of tissue and arterial blood temperatures in the resting human forearm," J Appl Physiol, 1:93-122, 1948.

Author: Khaldon Lweesy
Institute: Jordan University of Science and Technology
Street: P.O. Box 3030
City: Irbid
Country: Jordan
Email: [emailprotected]


Clinical Applications of Multispectral Imaging Flow Cytometry

H. Minderman1, T.C. George2, K.L. O'Loughlin1, and P.K. Wallace1
1 Roswell Park Cancer Institute, Flow and Image Cytometry Facility, Buffalo, USA
2 Amnis Corporation, Seattle, USA

Abstract–– The ImageStream is a flow cytometry-based image analysis platform that acquires up to 12 spatially correlated, spectrally separated images of cells in suspension at rates of up to 1000 cells/sec. By combining the high throughput and multiparameter capability of flow cytometry with the high image content of microscopy, it allows quantitative image analysis of immunophenotypically defined cell populations in statistically robust cell numbers. One area of its clinical application is the study of cell signal transduction pathways for which the intracellular localization of signaling intermediaries correlates with activity. For example, activation of the nuclear factor-kappaB (NF-κB) transcription factor complex is associated with the cytoplasmic-to-nuclear translocation of p65. To demonstrate this application, the nuclear translocation of p65 following receptor-mediated and drug-induced activation of NF-κB was studied in human myeloid leukemia cells. TNFα-induced nuclear translocation of p65 was rapid and concentration-dependent, peaking at 30 min of exposure, with maximum translocation achieved at concentrations above 5 ng/ml. Daunorubicin (DNR)-induced p65 translocation was concentration-dependent and correlated with DNR-induced apoptosis. The clinical context, the analysis approaches, and the results are presented.

Keywords–– Quantitative Imaging, Flow Cytometry, NF-κB, Signal Transduction.

I. INTRODUCTION

A. ImageStream Technology

The ImageStream platform is operationally similar to a flow cytometer, but has the ability to generate 12 simultaneous images of each cell analyzed, with resolution comparable to that of 60x magnification on a standard fluorescence microscope. Each cell is represented by a dark field image, two bright field images, and up to nine spectrally separated fluorescent images. The novelty of this technology is that it can provide quantitative information not only on the prevalence of molecular targets in a heterogeneous cell population, but also on their localization within the cell, with statistically meaningful numbers. The combination of these capabilities brings statistical robustness to image-based assays.

B. NF-κB Pathway

Many signal transduction pathways that control the activity of oncogenes and tumor suppressor genes implicated in oncogenesis and drug resistance have been characterized in recent years. Signal transduction through these pathways occurs through an intricate interplay between post-translational protein modifications, intracellular co-localizations, and transport of pathway intermediaries between cytoplasm and nucleus. The nuclear factor-kappaB (NF-κB) transcription factor complex regulates genes important in cell proliferation, survival, and drug resistance. It is held in an inactive state in the cytoplasm by binding to the inhibitor of nuclear factor κB (IκB) and is activated by phosphorylation of IκB by the IκB kinase (IKK) complex, which leads to ubiquitin-proteasome-mediated degradation of IκB and release of NF-κB for translocation to the nucleus [1-11]. Aberrant constitutive activation of this transcription factor has been implicated in many diseases, making it an important therapeutic target. The ability to measure the activity of this pathway by determining the intracellular localization of its pathway intermediaries in the target cells would be an important parameter of response to targeted therapies.

II. MATERIAL AND METHODS

A. Cell Line Models

To demonstrate concentration-dependent effects of receptor-mediated activation of NF-κB, ML-1 cells were exposed in vitro for 30 min to a concentration range of TNFα as detailed in the results section. To demonstrate drug-induced activation of NF-κB and its correlation with drug-induced apoptosis, HL60 cells were exposed in vitro for 4 h to a concentration range of daunorubicin (DNR), which has previously been demonstrated to activate NF-κB in this model [12].

B. Immunostaining

For both cell line models, following drug treatment cells were washed with PBS, fixed (10 min, 4% paraformaldehyde), permeabilized (0.1% v/v Triton X in PBS) and stained


(primary: polyclonal rabbit anti-human p65 antibody (SC-372, Santa Cruz Biotechnology, Santa Cruz, CA); secondary: FITC-conjugated donkey anti-rabbit (Jackson ImmunoResearch, West Grove, PA)). Immediately before acquisition with the ImageStream, cells were counterstained with the DRAQ5 nuclear stain (Axxora, San Diego, CA).

C. ImageStream Analysis

For each sample, the bright field, FITC and DRAQ5 images of 10,000 events were collected with the ImageStream. For each cell, the so-called 'Similarity score' for the FITC (p65) and the corresponding DRAQ5 (nucleus) images was calculated as a measure of nuclear p65 translocation. The ImageStream analysis software applies features (algorithms) and masking operations (region-finders) to perform image-based analysis [20]. The Similarity (S) score (Fig. 1) is a log-transformed Pearson's correlation coefficient (ρ) of the pixel values of the NF-κB and DRAQ5 image pair within the nuclear mask. If NF-κB is nuclear localized, its image will be similar to that of the DRAQ5 image, and the S-score will therefore have large positive values. If NF-κB is cytoplasmic, its image will be anti-similar to that of DRAQ5, and the S-score will therefore have large negative values. In Fig. 1, examples of the S-score calculation are shown for a typical untranslocated and a translocated cell. A frequency distribution plot can then be made for the S-score within a population, and relative shifts of these distributions between two populations (e.g., treated 't' versus control 'c') can then be calculated using the Fisher's Discriminant ratio (Rd).

The % apoptotic HL60 cells following DNR exposure were also determined with the ImageStream by quantifying the number of cells with condensed, fragmented nuclear images after extending the cultures for an additional 48 h. Fig. 2 shows representative bright field and corresponding nuclear images for a healthy and an apoptotic HL60 cell. Compared to healthy cells, apoptotic cells have a higher image contrast in the bright field image, and because the nuclear fluorescence is more condensed, the area of the 50% brightest fluorescence signal, as quantified by an 'area threshold' feature, is relatively smaller in apoptotic cells than in healthy cells. These 2 parameters are plotted and a gate can then be set for apoptotic cells. Note the shift of the distribution in these plots for apoptotic cells compared to the healthy control cells.
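A minimal sketch of the two statistics described above follows. Interpreting 'log-transformed Pearson correlation' as a Fisher z-transform is our assumption, and the images and mask are synthetic stand-ins:

```python
import numpy as np

def similarity_score(img_a, img_b, mask):
    """S-score: log-transformed (here, Fisher z) Pearson correlation of the
    two images' pixel values within the nuclear mask."""
    a, b = img_a[mask], img_b[mask]
    rho = np.corrcoef(a, b)[0, 1]
    return np.arctanh(np.clip(rho, -0.999999, 0.999999))

def fisher_rd(treated, control):
    """Fisher's Discriminant ratio between two S-score distributions."""
    return (np.mean(treated) - np.mean(control)) / (np.std(treated) + np.std(control))

# Toy example: a p65 image that largely tracks the DRAQ5 (nuclear) image.
rng = np.random.default_rng(0)
draq5 = rng.random((64, 64))
mask = draq5 > 0.5                                  # crude stand-in nuclear mask
nf_kb = 0.8 * draq5 + 0.2 * rng.random((64, 64))    # mostly nuclear signal
print("S = %.2f" % similarity_score(nf_kb, draq5, mask))
```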

Fig. 2 ImageStream analysis of apoptosis: bright field contrast (Contrast_M01_Ch01) versus nuclear area threshold (Area_Threshold(M05)) for a healthy and an apoptotic cell, with gate R4 identifying apoptotic events

III. RESULTS

A. Receptor-Mediated Time- and Concentration-Dependent Translocation of NF-κB

Fig. 1 ImageStream Similarity Score and Rd value (examples: untranslocated cell, Similarity Score −2.07; translocated cell, +2.82)

Fig. 3 summarizes the effects of TNFα exposure on p65 nuclear translocation in ML-1 cells as determined with the ImageStream analysis approach outlined in Fig. 1. The data in both graphs represent average values of four independent replicates of the same experiment. First, the time-dependent effect of TNFα exposure was studied by fixing cells at different time points following the initiation of exposure to 10 ng/ml TNFα. The graph on the left demonstrates that the effect of TNFα on nuclear translocation of p65 is rapid and maximizes at 30 min following the start of exposure. Note that in this model system, prolonging the exposure time beyond 30 min resulted in a decreased nuclear translocation of p65. Next, the concentration-dependent effect of TNFα exposure was studied in this same cell line model for a fixed exposure duration of 30 min. The graph on the right demonstrates that maximum translocation of p65 is achieved with concentrations of 5 ng/ml or higher.


Fig. 3 Time- (left) and concentration- (right) dependent translocation of p65 in ML-1 cells following exposure to TNFα (Rd, treated versus control, plotted against incubation time in minutes and against [TNF-alpha] in ng/ml)

B. DNR-Induced Translocation of NF-κB


The effect of DNR on p65 nuclear translocation in HL60 cells assessed by western blot analysis has been previously described [12]. The DNR concentrations (0.1, 0.25, 0.5 and 1.0 µM) and exposure duration (4 h) used in the present study were chosen to replicate the conditions used in this reference. Figure 4 summarizes the data of 5 replicate experiments in which the nuclear p65 translocation in HL60 cells was quantified by the ImageStream approach outlined in Fig. 1. The ImageStream analysis revealed that, as was previously described based on western blot analysis [12], exposure to DNR resulted in a concentration-dependent increase of nuclear p65.


Fig. 5 Top: Correlation between % DNR-induced apoptotic cells at 48 h and the ImageStream analysis of nuclear NF-κB translocation (similarity score) induced at 4 h in the same cultures of 5 replicate experiments. Bottom: linear regression analysis of the cumulative data. R = 0.9

Fig. 4 DNR-induced nuclear p65 translocation in HL60 cells (Rd relative to untreated control, versus DNR concentration in µM)

C. DNR-Induced Apoptosis and Correlation with Nuclear p65 Translocation

Next, the % apoptotic cells following 48 h exposure to DNR were evaluated using the ImageStream analysis approach outlined in Fig. 2. In Figure 5, the data of 5 replicate experiments are summarized by plotting the mean similarity scores (of p65 and nucleus) of the same data set shown in Fig. 4 versus the % apoptotic cells. The 5 different colors in Figs. 4 and 5 are associated with corresponding data sets.

IV. DISCUSSION

Gene transcription is regulated by the activity of proteins that are part of a signal transduction cascade. In eukaryotes, the nuclear-cytoplasmic transport of these signaling proteins is one mechanism of regulation of transcription [13, 14]. In cancer cells, the sub-cellular distribution of oncogenes and tumor suppressors is frequently perturbed due to modification of the proteins, defective nuclear-cytoplasmic transport mechanisms, or alterations in the nuclear pore complexes through which transport takes place. The sub-cellular localization of specific factors can thus be a determinant of the activity of a given pathway, and also of the efficacy of therapies directed at restoring normal activity. The presented studies focused on the NF-κB pathway, but NF-κB is only one of an increasing number of well-characterized pathways that can be aberrantly activated in cancer cells, including but not limited to p53 [15], p27 [16], FOXO-family transcription factors [17], INI1 [18] and β-catenin [19]. Conventional methods used to measure nuclear translocation have significant limitations. Biochemical/molecular techniques are both time-consuming and semi-quantitative in nature, and do not provide information


regarding heterogeneity within a sample. Microscopic visualization remains the most direct method for measurement but suffers from operator bias and small sample size. Recent advancements in imaging instrumentation and image processing have allowed numerical scoring of large populations of cells, bringing statistical robustness to the quantitation of nuclear translocation. In this field, the ImageStream platform is unique in that it is a flow cytometry-based technology and thus allows quantitative image analysis of cells in suspension. The analytical approach used in the present studies was developed to allow nuclear translocation measurements in immunologically relevant cell populations, using cross-correlation analysis of fluorescent nuclear and transcription factor images from each object [20]. This approach allows accurate measurement of translocation in cells with small cytoplasmic areas in a dose- and time-dependent manner, as well as in subsets of cells within a mixed cell population. The present data demonstrate the accuracy of these measurements with regard to TNFα- and DNR-induced NF-κB translocation and its correlation with a biological endpoint (induction of apoptosis). We are currently applying this approach in the study of the effects of the proteasome inhibitor bortezomib (Velcade) on the activation of NF-κB in the treatment of acute myeloid leukemia.

V. CONCLUSIONS

The ImageStream technology enables the quantitative study of the intracellular (co-)localization of fluorescently labeled molecular targets. The ability to perform such analysis in immunophenotypically defined target cells makes it a powerful tool for studying these parameters as determinants of response to targeted therapies.

ACKNOWLEDGEMENT

563

REFERENCES

1. Baldwin AS (1996) The NF-κB and IκB proteins: new discoveries and insights. Annu Rev Immunol 14:649-681
2. Ghosh S, May MJ, Kopp EB (1998) NF-κB and Rel proteins: evolutionarily conserved mediators of immune responses. Annu Rev Immunol 16:225-260
3. Miyamoto S, Verma IM (1995) Rel/NF-κB/IκB story. Adv Cancer Res 66:255-292
4. Siebenlist U, Franzoso G, Brown K (1994) Structure, regulation and function of NF-κB. Annu Rev Cell Biol 10:405-455
5. Karin M, Ben-Neriah Y (2000) Phosphorylation meets ubiquitination: the control of NF-κB activity. Annu Rev Immunol 18:621-663
6. Foo S, Nolan G (1999) NF-κB to the rescue. Trends in Genetics 15:229-235
7. Baichwal VR, Baeuerle PA (1997) Activate NF-κB or die? Curr Biol 7:R94-96
8. Sonenshein GE (1997) Rel/NF-κB transcription factors and the control of apoptosis. Semin Cancer Biol 8:113-119
9. Karin M, Cao Y, Greten F et al. (2002) NF-κB in cancer: from innocent bystander to major culprit. Nature Rev Cancer 2:301-310
10. Gilmore TD, Koedood M et al (1996) Rel/NF-κB/IκB proteins and cancer. Oncogene 13:1367-1378
11. Luque I, Gelinas C (1997) Rel/NF-κB and IκB factors in oncogenesis. Semin Cancer Biol 8:103-111
12. Boland MP et al (1997) Daunorubicin activates NF-κB and induces κB-dependent gene expression in HL60 promyelocytic and Jurkat T lymphoma cells. J Biol Chem 272(20):12952-12960
13. Fujihara SM, Nadler SG (1998) Modulation of nuclear protein import: a novel means of regulating gene expression. Biochem Pharmacol 56:157-161
14. Nigg EA (1997) Nucleocytoplasmic transport: signals, mechanisms and regulation. Nature 386:779–787
15. O'Brate A, Giannakakou P (2003) The importance of p53 location: nuclear or cytoplasmic zip code? Drug Resist Updat 6:313-322
16. Blagosklonny MV (2001) Are p27 and p21 cytoplasmic oncoproteins? Cell Cycle 1:391-393
17. Jacobs FM, Van der Heide LP, Wijchers PJ et al (2003) FoxO6, a novel member of the FoxO class of transcription factors with distinct shuttling dynamics. J Biol Chem 278:35959-35967
18. Craig E, Zhang ZK, Davies KP et al (2002) A masked NES in INI1/hSNF5 mediates hCRM1-dependent nuclear export: implications for tumorigenesis. EMBO J 21:31-42
19. Henderson BR, Fagotto F (2002) The ins and outs of APC and beta-catenin nuclear transport. EMBO Rep 3:834-839
20. George TC, Fanning SL et al (2006) Quantitative measurement of nuclear translocation events using similarity analysis of multispectral cellular images obtained in flow. J Immunol Methods 311:117-129

Supported by NIH 1R21-CA12667, 1S10RR022335 and the NCI Cancer Center Support Grant to the Roswell Park Cancer Institute (CA016056).


Multispectral Imaging, Image Analysis, and Pathology

Richard M. Levenson
Brighton Consulting Group, Principal, Brighton, MA, USA

Abstract— Biological systems are complex; multiparameter detection methods such as expression arrays and flow cytometry make this apparent. However, it is increasingly important not just to measure the overall expression of specific molecules, but also their spatial distribution, at various scales and while preserving cellular and tissue architectural features. Such high-resolution molecular imaging is technically challenging, especially when signals of interest are co-localized. Moreover, in fluorescence-based methods, sensitivity and quantitative reliability can be compromised by spectral cross-talk between specific labels and also by the autofluorescence commonly present, for example, in formalin-fixed tissues. In brightfield microscopy, problems of overlapping chromogenic signals pose similar imaging difficulties. These challenges can be addressed using commercially available multispectral imaging technologies attached to standard microscope platforms or, alternatively, integrated into whole-slide scanning instruments. However, image analysis is a central and still incompletely solved piece of the entire imaging process. New and evolving machine-learning technologies, as well as other image-understanding approaches, can create tools that can readily be used to separate image regions into appropriate classes (e.g., "cancer", "stroma", "inflammation") with (near) clinically acceptable accuracy. By itself this is useful, but it can also be combined with specific segmentation and quantitation tools to extract molecular data automatically from appropriate cellular and tissue compartments, information necessary for designing and testing targeted diagnostic and therapeutic reagents. Having such tools available will allow pathologists to deliver appropriate quantitative and multiplexed analyses in a reproducible and timely manner.

Keywords— image analysis; immunofluorescence; immunohistochemistry; segmentation; multispectral.

I. INTRODUCTION

A. New Roles for and Demands on Pathology

Demands on pathology as a discipline, and on pathologists personally, have multiplied, extending far beyond the simple post-facto correlations that marked the field's early years. The pathologist is called upon, of course, to arrive at a correct diagnosis or label for whatever process is manifested in a patient. Beyond that, prognostic information is sought: what, to a high level of precision, will be the clinical outcome? And predictive guidance is desired as well: which

drugs should or in many cases should not be given to an individual patient?

II. QUANTITATIVE MOLECULAR IMAGING To help answer these questions, new molecular targets have been identified for probe development, new labeling reagents have been commercialized, and these developments have been accompanied by advances in imaging technology. As importantly, the biological complexity of the sample has been acknowledged, and the conventional one-marker-at-a-time approach is recognized as inadequate. As a consequence, it is likely that fluorescence-enabled techniques will become increasingly part of the standard pathology armamentarium. The number and types of addressable molecular imaging targets continue to expand. Immunofluorescence (IF) and immunohistochemistry (IHC) began to have an impact on surgical pathology beginning in the 1970s [1], ushering in the era of true molecular pathology, which has now expanded to include detection of DNA and a variety of RNA species. These tissue-based methods can yield exquisite spatial resolution, giving molecular information down to the subcellular level while preserving spatial context all the way up to the centimeter-scale. They also provide the ability to look at different cell populations simultaneously, providing assurance that a molecular signature being studied really arises in the cells of interest, while permitting appreciation of “field effects” in which anatomically normal tissues adjacent to abnormal regions exhibit molecular abnormalities. Other, non-imaging-based multiplex assays (such as cDNA or proteomics arrays) almost always examine a mélange of tumor and non-tumor tissues, or at the very best, look at the average molecular state of many tumor cells mixed together. Even if an apparently pure tumor cell population is analyzed, perhaps via laser-capture, the fact that it is examined in the aggregate means that subpopulation signatures, if present, will be blurred into the bulk signal [2]. Thus, methods that can work at a single-cell level help ensure that the molecular repertoires of all of a tumor’s heterogeneous populations are properly evaluated. There are at least three drivers to the adoption of multiplexed methods in pathology. The first is a practical one: to the extent that antibody panels assayed on serial sections


could be multiplexed such that several probes could be applied on each slide, this would decrease sample-handling demands and simplify the work-flow. It would also decrease the demands placed on scarce samples when multiple stains are required. The second driver would be to apply current multi-molecular phenotyping, especially with cell-surface markers, as is done every day in flow cytometry, to a slide-based approach. Finally, we can anticipate that ongoing research in cancer and molecular biology, particularly employing intracellular systems approaches, will create a need to characterize multiple (signaling) molecules on a per-cell basis. In particular, antibody-based methods, in contrast to RNA- or DNA-focused techniques, can determine post-translational modifications, such as phosphorylation and de-phosphorylation, that play integral roles in mediating the activity of signaling networks; such signaling pathways rarely operate independently of one another. For example, signaling by cell surface receptors when they bind cognate growth factor ligands often activates both the RAS/Raf-MEK-ERK and PI3K-AKT pathways [3]. Activation of both is necessary for many growth factors, such as EGF, to produce their pleiotropic effects (e.g., cell proliferation, apoptosis resistance, etc.). While such analyses are not yet routine, we can anticipate that relevant molecular assays may eventually become part of the practice of clinical anatomic pathology, especially as they tie in with individualized patient profiling and drug selection. However, despite the large investment that has been made in molecularly targeted therapies in recent years, identification of robust predictors of therapeutic response for individual patients (vs. patient populations) has remained largely elusive.

A. Labeling Strategies

Molecular imaging typically requires some kind of label to be attached to a specific probe. Typical methods for protein detection include IF and IHC. Fluorescence, in which excited dyes emit signals at characteristic wavelengths, has its proponents, who point to qualities such as increased sensitivity, improved dynamic range, suitability for high levels of multiplexing even when signals are overlapped, potential for single-cocktail labeling approaches, and freedom from enzymatic amplification (and therefore improved linearity). On the other hand, fluorescence still suffers from interference from various sources of autofluorescence, especially with formalin-fixed tissue [4, 5], more complex and expensive instrumentation requirements, difficulties with inter-instrument calibration and quantitation, interference with pathology work-flow, the unfamiliar appearance of the sample, which no longer resembles a brightfield, H&E-stained specimen, and so on.


Brightfield chromogenic (colored) stains that absorb light at certain wavelengths have their own set of advantages and drawbacks. Most notably, especially when counterstained with hematoxylin, the tissues maintain a familiar appearance, allowing the microscopist to easily determine the tissue context of a positive molecular signal. The stains are relatively stable and do not require storage in the dark and cold; they can be viewed on any microscope; and quantitation of the stain can be instrument-independent when done properly, so inter-institutional comparisons can be feasible (this assumes—a big assumption—that the staining procedures and other variables are properly quality-controlled [6]). Disadvantages include a major one: absent spectral imaging, it is difficult at best to resolve multiple overlapping colors and recover even qualitative data from multiplexed chromogenically stained samples. However, with spectral imaging techniques, multiple chromogens can be successfully unmixed [7].

B. Spectral Imaging and Unmixing Techniques

Spectral imaging techniques offer to enhance the value of tissue examination, and to do so in ways that are both convenient and robust. In essence, they simply generate a series of images at a number of relatively narrow wavelength bands, typically 10- to 30-nm wide. By slicing the incoming light into these distinct ranges, a user can resolve and quantitate signals that may overlap both spatially and spectrally [8, 9], providing data that cannot typically be extracted from conventional color (RGB) images. The mathematics involved is similar to that used in traditional spectroscopy, with the distinction that here the spectroscopic information is linked, pixel by pixel, with high quality images. As will be discussed below, this combination can be used to help automate quantitative analyses. There are a number of ways to acquire spectral image data, reviewed in [9, 10]. After acquisition, the key task is to partition the overall optical signal at a given pixel correctly into its component species. Linear unmixing algorithms can unmix the data quickly and accurately, generating individual abundance images for each of the unmixed components, as well as a "component" image containing and combining all the unmixed species in one multiplane display. Fluorescence-based data are used directly in the unmixing procedure. Brightfield images, which rely on light-absorbing chromogens rather than light-emitting fluorophores, must first be mathematically converted to optical density. Since chromogen-based quantitation relies on Beer's law to work properly, any deviation from pure absorption behavior can affect the results. Some chromogens, unfortunately including the popular brown DAB stain, scatter as


well as absorb light. However, in practice, this does not seem to pose insuperable problems, since linearity and reasonable dynamic range can be achieved using DAB staining [11]. Other chromogens, such as Vector Red, have been shown to display good linearity and dynamic range [12].
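A minimal sketch of the brightfield pipeline described above: conversion to optical density via Beer's law, followed by per-pixel linear unmixing (implemented here with non-negative least squares; the reference spectra and image are synthetic):

```python
import numpy as np
from scipy.optimize import nnls

wavelengths = 8
E = np.random.default_rng(1).random((wavelengths, 3))  # e.g. DAB, red, hematoxylin
E /= E.sum(axis=0)                                     # normalized reference spectra

I0 = 4096.0                                            # white (blank) reference
abundances_true = np.random.default_rng(2).random((16, 16, 3))
I = I0 * 10 ** (-(abundances_true @ E.T))              # simulated transmission stack

od = -np.log10(np.clip(I, 1.0, None) / I0)             # Beer's law: OD per band
unmixed = np.zeros_like(abundances_true)
for i in range(od.shape[0]):
    for j in range(od.shape[1]):
        unmixed[i, j], _ = nnls(E, od[i, j])           # abundances for this pixel
print("max abs error:", np.abs(unmixed - abundances_true).max())
```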

III. AN EXAMPLE FROM HEMATOPATHOLOGY

Acute leukemias, myeloproliferative diseases, and myelodysplastic conditions are three sets of related clinical settings in which accurate assessment of disease activity is essential, part of the standard of care, and used to determine not only prognosis but therapy [13]. Detection of malignant blasts in the marrow can be accomplished using bone marrow aspirates and/or bone marrow biopsies, but these methods are presently either difficult, not completely reliable, or both. For aspirates, the problems are connected to issues of sampling, and for marrow biopsies the detection and counting of blasts is currently a subjective and imprecise procedure. Bone marrow biopsies can be preferable to aspirate-based methods since they retain the architecture of the bone marrow environment, preserve the presence and distribution of focal blast accumulations, and do not suffer from sampling issues (dry taps or hemodilution). They also allow estimation of marrow cellularity and of the presence and degree of fibrosis. With simple H&E staining, identification of blasts by their morphology is intrinsically hard, especially due to the destruction of some morphological detail caused by decalcification. Moreover, blast levels are frequently estimated without actual counting, via a gestalt impression ("looks like about 5%"). Immunophenotyping with chromogenic labels could simplify the task, but no single antigen currently identified is pathognomonic. However, double-labeling could detect blast populations that co-express antigens typically seen only singly in non-blast populations. Figure 1, panels A and B, shows an application of multispectral imaging to an immunostained decalcified bone marrow specimen from a patient biopsied after chemotherapy, with the goal of identifying double-labeled blasts. The sample was stained for two markers often expressed in blasts [14]: CD34 (with a red chromogen) and c-Kit (with DAB, the commonly used brown chromogen). Both of these markers are also expressed (singly) in normal marrow elements: CD34 on endothelial cells, and c-Kit in mast cells and hematopoietic stem cells [15]. The sample shown was counterstained with hematoxylin, and single-red, single-brown, and double (red+brown) labeled cells were identified using brightfield spectral imaging.

Fig. 1 Multispectral detection of blasts in bone marrow biopsy (A and B), and automated segmentation of bone marrow elements (C and D). See text for further explanation.

In this example, a spectral dataset was created by collecting images from 440 to 700 nm, and unmixing was used to separate the chromogens from each other and from the hematoxylin counterstain using the spectral curves shown in the inset in panel A. CD34 signals were unmixed into a red pseudo-color simulated fluorescence channel, and the c-Kit signals were unmixed into green; the hematoxylin signal was concurrently suppressed by unmixing it into black. Note how the unmixed image resembles fluorescence: changing display modes can often be helpful for increasing legibility. Double-stained blasts are indicated by the presence of a yellow (green plus red) signal. The prominent vessel in the center is red, as would be expected for CD34-only staining behavior, and numerous green-only signals are visible, indicating the presence of cells in the mast-cell or granulocyte lineages (or the existence of spectrally similar hemosiderin).

A. Regions of Interest (ROIs)

Estimation of blast levels requires some "denominator": it is important to assess the extent of the relevant bone-marrow compartment in which blasts could be present. Regions consisting of bone, clot, and fat are not relevant to this estimation. What would be useful is a way of automatically detecting the extent of cellular marrow, and then determining how many blasts are present within this compartment. Existing commercial products for quantitative analysis use several approaches for detecting ROIs. The


first is to have the operator manually outline regions of interest (for example, cancer) to restrict quantitation to the appropriate tissue compartment; the second is to employ one immunostain to identify appropriate regions and then evaluate the expression of another analyte in the defined areas [16]; finally, another possible approach is to use imaging algorithms to define the various compartments. One useful version of this relies on machine-vision techniques [17]. This approach can be used to create a classifier that can distinguish between cancer, normal tissue, stroma, and inflammatory infiltrates, to use one reasonable palette. Training can be extended over multiple examples in order to encompass the variability in the sample set. As shown in Fig. 1 (C and D), an H&E-stained section of bone marrow (C) can be separated (D) via machine-learning-based algorithms into bone, fat, and clot (pink) and cellular marrow elements (green). The final step in the analysis would then be to measure the area occupied by the blast population, and then divide that number by the area of true marrow elements to arrive at a normalized estimate of the percentage of marrow occupied by neoplastic cells (in this case, it was about 15%). The ability to perform such quantitative analysis could provide accurate, objective, and reliable assessments of patients' clinical status.
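The final normalization reduces to a ratio of mask areas. A toy sketch, with synthetic masks standing in for the classifier and unmixing outputs:

```python
import numpy as np

rng = np.random.default_rng(3)
marrow_mask = rng.random((512, 512)) < 0.6                   # "cellular marrow" pixels
blast_mask = marrow_mask & (rng.random((512, 512)) < 0.15)   # double-labeled pixels

pct_blasts = 100.0 * blast_mask.sum() / marrow_mask.sum()
print("marrow occupied by blasts: %.1f%%" % pct_blasts)      # ~15% in this toy case
```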

IV. CONCLUSIONS

Novel imaging and analysis capabilities can provide pathology with many of the tools it needs to generate the information now being requested. Prognosis, therapy selection, and therapeutic monitoring will in many instances involve determinations of multiple analytes in unhomogenized, spatially intact tissue specimens, with resolution sufficient to measure expression in individual subcellular compartments. None of today's competing phenotyping technologies (e.g., expression arrays, serum proteomics, in-vivo imaging) can provide comparable spatial and molecular precision.

ACKNOWLEDGMENT

I would like to acknowledge my former colleagues at Cambridge Research and Instrumentation, and Drs. Massimo Loda and Alessandro Fornari for assistance in preparation of this manuscript. Samples were kindly provided by Dr. Raul Braylan, University of Florida. The work was funded in part through support from the NIH via grants BRP 5R01CA108468 and SBIR 2R43CA088684.


REFERENCES

1. Taylor C R, Cote R J. 1997. Immunohistochemical markers of prognostic value in surgical pathology. Histol Histopathol 12: 1039-55
2. Banks R E, Dunn M J, Forbes M A, et al. 1999. The potential use of laser capture microdissection to selectively obtain distinct populations of cells for proteomic analysis -- preliminary findings. Electrophoresis 20: 689-700
3. Lugli A, Zlobec I, Minoo P, et al. 2006. Role of the mitogen-activated protein kinase and phosphoinositide 3-kinase/Akt pathways downstream molecules, phosphorylated extracellular signal-regulated kinase, and phosphorylated Akt in colorectal cancer: a tissue microarray-based approach. Hum Pathol 37: 1022-31
4. Mansfield J R, Gossage K W, Hoyt C, et al. 2005. Autofluorescence removal, multiplexing, and automated analysis methods for in-vivo fluorescence imaging. J Biomed Opt 10: 41207
5. Levenson R M, Mansfield J R. 2006. Multispectral imaging in biology and medicine: slices of life. Cytometry A 69: 748-58
6. Taylor C R, Levenson R M. 2006. Quantification of immunohistochemistry -- issues concerning methods, utility and semiquantitative assessment II. Histopathology 49: 411-24
7. Levenson R M. 2006. Spectral imaging perspective on cytomics. Cytometry A 69: 592-600
8. Farkas D L, Du C, Fisher G W, et al. 1998. Non-invasive image acquisition and advanced processing in optical bioimaging. Comput Med Imaging Graph 22: 89-102
9. Garini Y, Young I T, McNamara G. 2006. Spectral imaging: principles and applications. Cytometry A 69: 735-47
10. Bearman G, Levenson R. 2003. Biological imaging spectroscopy. In Biomedical Photonics Handbook, ed. T Vo-Dinh, pp. 8-1 to 8-26. Boca Raton: CRC Press
11. Matkowskyj K A, Cox R, Jensen R T, et al. 2003. Quantitative immunohistochemistry by measuring cumulative signal strength accurately measures receptor number. J Histochem Cytochem 51: 205-14
12. Ermert L, Hocke A C, Duncker H R, et al. 2001. Comparison of different detection methods in quantitative microdensitometry. Am J Pathol 158: 407-17
13. Sebban C, Browman G P, Lepage E, et al. 1995. Prognostic value of early response to chemotherapy assessed by the day 15 bone marrow aspiration in adult acute lymphoblastic leukemia: a prospective analysis of 437 cases and its application for designing induction chemotherapy trials. Leuk Res 19: 861-8
14. Oertel J, Oertel B, Schleicher J, et al. 1996. Immunotyping of blasts in human bone marrow. Ann Hematol 72: 125-9
15. Miettinen M, Lasota J. 2005. KIT (CD117): a review on expression in normal and neoplastic tissues, and mutations and their clinicopathologic correlation. Appl Immunohistochem Mol Morphol 13: 205-20
16. Camp R L, Chung G G, Rimm D L. 2002. Automated subcellular localization and quantification of protein expression in tissue microarrays. Nat Med 8: 1323-7
17. Levenson R. 2008. Putting the "more" back in morphology: spectral imaging and image analysis in the service of pathology. Arch Pathol Lab Med 132: 748-57


Author: Richard Levenson
Institute: Brighton Consulting Group
Street: 52 Greycliff Rd.
City: Brighton, MA 02135
Country: US
Email: [emailprotected]

Sensitive Characterization of Circulating Tumor Cells for Improving Therapy Selection

H. Ben Hsieh1, George Somlo2, Robyn Bennis1, Paul Frankel2, Robert T. Krivacic1, Sean Lau2, Janey Ly1, Erich Schwartz3, and Richard H. Bruce1

1 Palo Alto Research Center/Biomedical Engineering, Palo Alto, CA
2 City of Hope Cancer Center/Medical Oncology, Duarte, CA
3 Stanford University/Department of Medicine, Stanford, CA

Abstract— For metastatic disease, biomarker profiling of distant metastases is performed only when feasible, because biopsy of metastases is invasive and associated with potential morbidity without proven benefit. Thus, although biomarker expression may differ in distant metastases, treatment with targeted therapies is almost always based on biomarker targets derived from a patient's primary breast tumor, usually excised years before the development of metastatic disease. This work addresses measurement of biomarker expression on circulating tumor cells (CTCs) as a source of current biomarker expression. CTCs are rapidly located on a planar substrate with a sensitive detection instrument using fiber array scanning technology (FAST). The instrument targets abundant cytokeratins rather than EpCAM. The assay includes quantitative measurement of expression levels of 3 breast cancer markers (HER2, ER and ERCC1) that predict efficacy of treatment. We have observed high discordance rates in cancer markers between CTCs and tissue. Multiplex testing may allow for personalized therapy for patients.

Keywords— circulating tumor cells.

I. INTRODUCTION

In recent clinical trials, detection of CTCs provided prognostically useful information regarding progression-free and overall survival [1] and treatment efficacy [2] in a subset of patients. However, enumeration does not provide information for choosing the optimal therapy. The biological characteristics of CTCs differ from those of the primary tumor and change during disease progression. The level of this discordance has been reported to be substantial, with HER2-positive CTCs observed in up to 50% of breast cancer patients whose primary tumor was HER2-negative [3]. Because CTCs can provide a different biological characterization of the disease, their phenotype could be important for prediction of therapeutic response.

The estimated frequency of CTCs in blood is in the range of one tumor cell per 10^6 to 10^7 WBCs (1-10 CTCs/mL). At such low concentrations, reliable identification of these cells is a formidable technical challenge. Solutions developed to overcome this problem focus on enrichment of CTCs [4-6] to reduce sample size. While enrichment protocols are extremely effective at increasing the proportion of analyzable cells (both nucleated hematopoietic cells and rare CTCs), these methods can result in considerable cell loss or cell damage [7]. On the other hand, an automated digital microscope (ADM) can provide high sensitivity and minimal cell damage, but the analysis of a meaningful sample size remains prohibitively long for a clinical assay.

Another major barrier to the reliable identification of CTCs stems from their extreme biological heterogeneity, which is exhibited in a wide range of genetic, biochemical, immunological and biological characteristics, such as cell surface receptors, enzymes, karyotypes, cell morphologies, growth properties, sensitivities to various agents and the ability to invade and produce metastases. Sample preparation protocols and detection methods therefore need to accommodate this heterogeneity.

We have previously described a novel approach that uses fiber-optic array scanning technology (FAST) to address the rare-cell detection problem [8]. With FAST cytometry, laser-printing optics are used to excite 300,000 cells/sec, and fluorescence emission is collected in an array of optical fibers that forms a wide collection aperture. We demonstrated that, with its extremely wide field-of-view (FOV), the FAST cytometer can locate CTCs at a rate 500 times faster than an ADM, the current gold-standard method of automated CTC detection. We provided experimental evidence that the FAST cytometer achieves this detection speed with comparable sensitivity and improved specificity. Because of this high scan rate, no additional processing or enrichment of CTCs, which could reduce sensitivity through cell loss, is required. In addition, unlike alternative techniques for CTC detection such as PCR or flow cytometry, FAST cytometry enables the cytomorphology of the prospective rare cells to be readily examined. The processing and staining protocols used in the FAST assay were designed to preserve morphology and enable multi-marker characterization of target cells [9].
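A quick back-of-envelope calculation, using only figures quoted in this paper (300,000 cells/sec for FAST, an ADM that is ~500x slower, and the 50-million-cell sample discussed under Results), shows why the scan-rate difference is decisive for a clinical assay. The script is purely illustrative:

```python
# Rates taken from the text; sample size from the specificity discussion below.
FAST_RATE = 300_000          # cells/sec excited by the laser-printing optics
ADM_RATE = FAST_RATE / 500   # ADM is ~500x slower than FAST
SAMPLE = 50_000_000          # nucleated cells scanned without enrichment

print(f"FAST scan: {SAMPLE / FAST_RATE / 60:.1f} minutes")  # ~2.8 minutes
print(f"ADM scan:  {SAMPLE / ADM_RATE / 3600:.1f} hours")   # ~23 hours
```

A scan of a few minutes is compatible with clinical throughput; a day-long scan is not, which is the practical argument for FAST-based pre-location followed by targeted ADM imaging.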



II. MATERIALS AND METHODS

A. Cell Attachment and CTC Identification

Blood samples were processed using a previously described method [10] with some minor modifications. Briefly, 10 mL blood samples were drawn into Cytochex Cell Free DNA BCT tubes (Cat. #: 218962, Streck Inc., Omaha, NE), shipped overnight and processed within 24 hr. Samples were subjected to erythrocyte lysis according to our previously described protocol. The remaining cell pellet was then washed, re-suspended in phosphate-buffered saline (PBS), and plated on custom-designed adhesive glass substrates (12.7 x 7.6 cm slides with an active area of 64 cm2; Paul Marienfeld GmbH & Co. KG, Bad Mergentheim, Germany). The cells were then incubated for 40 minutes at 37ºC. After incubation, excess liquid is decanted and slides are fixed with 2 mL of 2% paraformaldehyde at room temperature for 10 min. Slides are then rinsed twice in PBS, submerged in cold acetone at -20ºC, and rinsed in PBS again. Slides are then blocked with a buffer containing 20% human AB serum (Sigma, H4522) in PBS at 37ºC for 20 minutes.

Primary antibodies used in this study were mouse anti-human CD45 IgG2a (MCA87, AbD Serotec, Raleigh, NC) directly conjugated with Qdot 705 (Invitrogen custom conjugation), a cocktail of mouse monoclonal anti-cytokeratin antibodies for cytokeratin classes 1, 4, 5, 6, 8, 10, 13, 18 and 19 (C2562, Sigma), and mouse monoclonal anti-cytokeratin 19 antibody (RCK108, DAKO). In order to detect cells with very weak CK expression while minimizing nonspecific binding, we perform tertiary antibody amplification: the secondary antibody for CK is biotin-XX goat anti-mouse IgG1 (A10519, Invitrogen), followed by a streptavidin-Alexa 555 tertiary reagent (S-32355, Invitrogen). On average, signals are 2 to 4x stronger with tertiary amplification, with no impact on noise. The cell nucleus is counterstained with DAPI (0.5 μg/mL 4',6-diamidino-2-phenylindole, D-21490, Invitrogen), and a coverslip is mounted with Live Cell mounting medium (0.25 g n-propyl gallate and 0.13 g Tris-HCl in 4 mL ddH2O; add 36 mL glycerol and heat to dissolve). CTCs were identified by their morphology and immunophenotype (CK+, CD45-, DAPI+) as described previously [10].

For cancer marker labeling, secondary antibodies are matched to the primary antibody probes in order to ensure sufficient signal for quantification. Antibodies derived from different immunoglobulin G isotype subclasses or different species were used for simultaneous staining without cross-reactivity. These antibodies are pre-absorbed against the other IgG subclasses, the other immunoglobulin classes, or the serum of other species to minimize cross-reactivity.


The multiplex assay for characterization of breast CTCs includes three additional markers to measure expression levels of HER2 (membrane receptor), ER (nuclear staining) and excision repair cross-complementation group 1 (ERCC1), a marker of DNA repair (nuclear staining). The primary antibody against HER2 (erbB2, chicken anti-human, ProSci Inc. cat. #: ab14027) was followed by a Qdot 655-conjugated goat anti-chicken secondary antibody (Q14421MP, Invitrogen). The primary antibody against the estrogen receptor (ER-α, monoclonal rabbit anti-human, LabVision cat. #: RM-9101) was followed by an Alexa 750-tagged goat anti-rabbit secondary antibody (A-21039, Invitrogen). The primary antibody against ERCC1 (mouse anti-human IgG2b, sc-17809, Santa Cruz Biotech) was followed by an Alexa 647-tagged goat anti-mouse IgG2b secondary antibody (A-21242, Invitrogen).

B. FAST Optical System

The FAST scanner scans samples at a rate of 25 million cells/min [8]. A laser raster enables the fast scan rate (100 lines/sec). An argon-ion laser with 4 mW output excites fluorescence in labeled cells, and the emission is collected by optics with a large (76 mm) field-of-view. This field-of-view is enabled by an optical fiber bundle with asymmetric ends. The numerical aperture (NA) of the FAST scanner is 0.65 and is determined by the index of refraction of the borosilicate fibers (1.51) used for fluorescence collection. The resolution of the scanning system (10 μm) is determined by the spot size of the scanning laser. The emission from the fluorescent probes is filtered with standard dichroic filters before detection in a photomultiplier. A polygon laser scanner produces a laser scan speed of 10 m/sec. The sample is moved orthogonally across the laser scan path on a microscope stage at a rate of 3 mm/sec. Fluorescent objects are located with an accuracy of 40 μm relative to alignment marks on the substrate.

C. Sensitivity Testing

To test the inherent sensitivity, we prepared samples by spiking 2 to 50 HT29 cells into 1 mL of whole blood. Samples were first scanned by the FAST instrument, which located objects labeled with Alexa 555; these were imaged at 20x by the ADM for identification. The samples were subsequently scanned in the ADM using a 4x objective with a low numerical aperture (NA = 0.2) by stepping the effective field-of-view (3.4 mm2) across the sample. The HT29 cells have sufficient intensity to be easily detected by the 4x objective. Image analysis located areas of fluorescence above background, and these were then imaged with the same 20x objective used for the objects located by the FAST cytometer. The images were analyzed by trained personnel.

D. Sample Scoring

To score cancer marker expression in patient samples, we adopted a methodology from tissue analysis that combines expression level and the percentage of expressing cells in the sample. The expression level is scored relative to a moderately expressing cell line for each marker, processed alongside the sample. A CTC with an expression level within the 34th quantile of the median of the cell line control is scored a 2, while CTCs expressing higher levels are scored a 3. CTC expression levels lower than the cell line but higher than background are scored a 1. For the breast cancer markers described here (HER2, ER, ERCC1), leukocyte expression is used for the background. The cell line controls used are MDA-MB-453 for HER2, T-47D for ER and A-549 for ERCC1. The percent population is scored linearly on a 10-point scale, with 0 for less than 10% expressing CTCs, 1 for 10% to 20%, and so on up to 10 for populations between 90% and 100%. The sample score is the product of the average expression score and the population score.
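The scoring rule is straightforward to express in code. The sketch below is our reading of the method, with one interpretation flagged explicitly: the "34th quantile" band is treated as a symmetric tolerance around the control-line median (roughly a one-standard-deviation band), and the population score maps 90-100% to 10 as stated in the text. All names are hypothetical.

```python
import numpy as np

def expression_score(ctc_level, control_levels, background):
    """Score one CTC: 0 = not expressing, 1 = above background but below
    the control band, 2 = within the control band, 3 = above it."""
    med = np.median(control_levels)
    # Assumption: "within the 34th quantile of the median" read as a
    # +/-34% band around the control-cell-line median.
    lo, hi = med * (1 - 0.34), med * (1 + 0.34)
    if ctc_level <= background:
        return 0
    if ctc_level < lo:
        return 1
    return 2 if ctc_level <= hi else 3

def sample_score(ctc_levels, control_levels, background):
    scores = [expression_score(x, control_levels, background) for x in ctc_levels]
    expressing = [s for s in scores if s > 0]
    pct = 100.0 * len(expressing) / len(scores)
    # 0 for <10% expressing, 1 for 10-20%, ..., 10 for 90-100% (per the text).
    pop = 10 if pct >= 90 else int(pct // 10)
    avg_expr = np.mean(expressing) if expressing else 0.0
    return avg_expr * pop   # sample score = average expression x population score
```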

III. RESULTS

A. Sensitivity

A comparison of FAST detection sensitivity to that of an ADM shows identical sensitivities. In the test, we varied the number of HT29 cells spiked into 1 mL of blood by nearly two orders of magnitude and detected a total of 238 cells in 13 samples. Using CK as a target, each scan by the FAST cytometer located exactly the same number of cells as the ADM.

B. Specificity

In CTC detection, the three main sources of false positives located by FAST are autofluorescent particles, labeled cellular debris, and dye aggregates. In our observations, the vast majority of false-positive detections originate from autofluorescing particles. These generally fluoresce broadly, and their fluorescence intensity diminishes as the magnitude of the Stokes shift (the wavelength shift of the emission from the excitation) increases. We use a wavelength-comparison technique to filter away a substantial number of autofluorescing particles, measuring emissions at two different wavelengths. The CK probe is selected to have an emission wavelength (580 nm) with a relatively large separation (95 nm) from the excitation wavelength (488 nm). By comparing the emissions at an intermediate wavelength (525 nm), false positives can be identified: they have a relatively higher intensity at 525 nm than at 580 nm, while objects carrying probes have a relatively higher intensity at 580 nm. The ratio of the two wavelengths is used to eliminate the autofluorescing particles. False positives originating from dye aggregates and cell fragments are successfully eliminated with appropriate filtering for object size and brightness. With the current filtering algorithms, 99.8% of the false positives are eliminated without loss of sensitivity. The typical specificity for a FAST scan is around 3x10^-6, meaning that only 150 false positives are found in a sample containing 50 million WBCs.

C. Patient Results

The assay includes quantitative measurement of the expression of 3 breast cancer markers (HER2, ER and ERCC1) that may predict efficacy for specific therapies, in addition to the markers needed for CTC identification (CK, DAPI and CD45). We observed high discordance rates in cancer markers between CTC and tissue characterization. For determining marker status in primary tissue, conventional clinical scoring was used for HER2 and ER, and the median score was used for ERCC1 [11]. For CTCs, the sample status for each marker was determined from the score using a cutoff that minimizes discordance. Only patients with 5 or more CTCs were used for biomarker analysis. For HER2 expression, 18 patients with metastatic breast cancer (MBC) were analyzed, and the observed discordance was 28%. The discordance rate for ER expression in 14 patients was 36%. The discordance rate for ERCC1 expression in 13 patients was 38%.
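The two-wavelength rejection step reduces to a simple ratio test. In the sketch below the 1.0 ratio cutoff is illustrative only (the paper does not state the actual threshold), and the final line reproduces the specificity arithmetic quoted above:

```python
def is_autofluorescence(i_525, i_580, ratio_cutoff=1.0):
    """Autofluorescent particles emit broadly, so they are relatively
    brighter at the intermediate wavelength (525 nm); genuine CK-probe
    signal (emission ~580 nm) is relatively brighter at 580 nm."""
    return i_525 / i_580 > ratio_cutoff

# Specificity arithmetic from the text: 3e-6 x 50 million WBCs = 150
false_positives = 3e-6 * 50_000_000   # -> 150.0
```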

IV. DISCUSSION

We presume that improvements in CTC measurement can be achieved if sample preparation and detection technique are considered together. The sample preparation process described here was developed to preserve both the maximum number and the forms of available CTCs. This was accomplished using only minimal blood preparation and omitting antigen- or density-dependent enrichment methods. With this minimal pre-analytical processing approach, the burden is placed on the detection instrument to scan the remaining large number of prospective cells with high sensitivity and specificity. While the FAST cytometer is capable of scanning fast enough to screen 50 million nucleated cells without enrichment, the subsequent ADM imaging becomes a limitation when the false-positive count exceeds a few hundred. We expect that the use of automation with further process optimization should reduce false-positive levels.

Fig. 1 Images of cancer marker labeling. Original cells in top row with CK (red) and nucleus (blue); bottom row shows markers (green), from left to right: ERCC1, HER2, and ER.

While several other studies address CTC characterization, our approach enables the preservation of cellular morphology together with a number of simultaneously detected markers. The detailed cytomorphological and immunophenotypical characterization enabled by imaging undistorted cells on a planar surface is relevant not only for CTC identification and characterization. High-fidelity images enable the incorporation of marker localization in assessing expression levels. For example, HER2 is localized to the membrane while ERCC1 and ER are localized to the nucleus, as shown in Fig. 1. The use of localization improves the specificity of CTC identification and the assessment of cancer marker expression level by reducing the inclusion of nonspecific binding.

While the discordance between cancer marker status in CTCs and tissue is comparable to early results of others, these reported levels of discordance vary over a considerable range. Although some of this variation could be due to small sample sizes, it is likely that the variation also derives from the CTC detection methodology as well as the approach to quantifying the expression level. In addition, the cutoff score that differentiates positive and negative marker status on CTCs could well be different from that empirically determined for tissue. The cutoff score for CTCs will need to be determined from patient outcome.

V. CONCLUSIONS

Detecting multiple markers in CTCs from patients with MBC is feasible, and significant discordance between expression patterns on CTCs and the primary tumor is observed. Multiplex testing may allow for personalized therapy for patients with metastatic breast cancer.

ACKNOWLEDGMENT

The work was supported by funding from the National Cancer Institute.

REFERENCES

1. Cristofanilli M, Budd GT, Ellis MJ, et al. (2004) Circulating tumor cells, disease progression, and survival in metastatic breast cancer. N Engl J Med 351:781-91
2. Budd GT, Cristofanilli M, Ellis MJ, et al. (2006) Circulating tumor cells versus imaging: predicting overall survival in metastatic breast cancer. Clin Cancer Res 12:6403-9
3. Wülfing P, Borchard J, Buerger H, et al. (2006) HER2-positive circulating tumor cells indicate poor clinical outcome in stage I to III breast cancer patients. Clin Cancer Res 12(6):1715-20
4. Vona G, Sabile A, Louha M, et al. (2000) Isolation by size of epithelial tumor cells: a new method for the immunomorphological and molecular characterization of circulating tumor cells. Am J Pathol 156:57-63
5. Martin VM, Siewert C, Scharl A, et al. (1998) Immunomagnetic enrichment of disseminated epithelial tumor cells from peripheral blood by MACS. Exp Hematol 26:252-64
6. Maheswaran S, Sequist LV, Nagrath S, et al. (2008) Detection of mutations in EGFR in circulating lung-cancer cells. N Engl J Med 359(4):366-77
7. Goeminne JC, Guillaume T, Symann M (2000) Pitfalls in the detection of disseminated non-hematological tumor cells. Ann Oncol 11:785-92
8. Krivacic RT, Ladanyi A, Curry DN, et al. (2004) A rare-cell detector for cancer. PNAS 101:10501-4
9. Marrinucci D, Bethel K, Bruce RH, et al. (2007) Case study of the morphologic variation of circulating tumor cells. Hum Pathol 38:514-9
10. Hsieh HB, Marrinucci D, Bethel K, et al. (2006) High speed detection of circulating tumor cells. Biosens Bioelectron 21:1893-9
11. Olaussen KA, Dunant A, Fouret P, et al. (2006) DNA repair by ERCC1 in non-small-cell lung cancer and cisplatin-based adjuvant chemotherapy. N Engl J Med 355:983-91

Author: Richard Bruce
Institute: Palo Alto Research Center
Street: 3333 Coyote Hill Rd
City: Palo Alto, CA
Country: United States
Email: [emailprotected]

Nanohole Array Sensor Technology: Multiplexed Label-Free Protein Binding Assays

J. Cuiffi1, R. Soong2, S. Manolakos1, S. Mohapatra3, and D. Larson2

1 Draper Laboratory – Bioengineering Center at USF, Tampa, USA
2 Draper Laboratory, Cambridge, USA
3 University of South Florida, Department of Molecular Medicine, Tampa, USA

Abstract— We present a review of current implementations of nanohole array sensor technology and discuss future trends for this technique as applied to multiplexed, label-free protein binding assays. Nanohole array techniques are similar to surface plasmon resonance (SPR) techniques in that local refractive index changes at the sensor surface, correlated with protein binding events, are probed and detected optically. Nanohole array sensing differs in its use of a transmission-based mode of optical detection, extraordinary optical transmission (EOT), which eliminates the need for prism coupling to the surface and provides high spatial and temporal resolution for chip-based assays. This enables nanohole array sensor technology to combine the real-time, label-free analysis of SPR with the multiplexed assay format of protein microarrays. Various implementations and configurations of nanohole array sensing have been demonstrated, but the use of this technology for specific research or commercial applications has yet to be realized. In this review, we discuss the potential applications of nanohole array sensor technology and the impact that each application has on nanohole array sensor, instrument and assay design. A specific example presented is a multiplexed biomarker assay for metastatic melanoma, which focuses on biomarker specificity in human serum and ultimate levels of detection. This example demonstrates strategies for chip layout and the integration of microfluidic channels to take advantage of the high spatial resolution achievable with this technique. Finally, we evaluate the potential of nanohole array sensor technology against current trends in SPR and protein microarrays, providing direction toward development of this tool to fill unmet needs in protein analysis.

Keywords— SPR, extraordinary optical transmission, nanohole array sensor, label-free detection, protein microarray.

I. INTRODUCTION

Nanohole array sensor technology is a promising approach for highly multiplexed, label-free protein binding assays. High-throughput protein interaction analysis has proven difficult to implement in comparison with the highly successful DNA microarray technology [1-5], and many factors contribute to this. Proteins are unstable in both chemistry and conformation compared with nucleic acids, and require specific orientation when attached to a surface. Proteins interact with a variety of molecular species, including small molecules, nucleic acids, and other proteins. The capture species, especially other proteins such as antibodies, are more difficult to synthesize than DNA capture probes. Labels are required for typical optical microarray imaging techniques, and these may interfere with species interactions. Finally, not only are absolute and relative protein concentrations often desired, but also protein interaction kinetics. Although the nature of proteins and their interactions cannot be changed, label-free binding assays offer an approach to determining protein concentrations and kinetics without interference from molecular tags [6-8]. Surface plasmon resonance (SPR) techniques have proven to be the modern label-free standard for protein kinetics assays [9-14]. SPR does not, however, offer easy integration with highly multiplexed formats such as protein microarrays, or a limit of detection (LoD) comparable to labeled techniques such as enzyme-linked immunosorbent assays (ELISA) [6,8,13,15]. Recent advances in nanohole array sensor technology have shown promise for achieving high-density, label-free kinetic measurements coupled with the potential for improved LoD over SPR [16-21].

II. TECHNOLOGY REVIEW A. Nanohole Arrays Sensor vs. SPR SPR techniques operate by measuring local index of refraction changes of a liquid (or gas) solution on a metal surface. The principle of operation is shown schematically in Fig 1a. Light is coupled to surface plasmons in the metal with a prism or grating and the reflected light is analyzed. The surface plasmons are sensitive to the local index of refraction and alter the coupled/reflected wave, offering a detection mechanism through changes in coupling angle, coupling wavelength, reflected light intensity or reflected light phase[13]. In a typical protein binding experiment a detection molecule (e.g. antibody) is fixed to the surface. The interacting molecule of choice (e.g. antigen) is then perfused across the surface, and real-time interaction assessments are made as molecules bind near (within ~200nm) the surface[22].


Fig. 1 a) Schematic of SPR operation showing an example of a protein antibody capture system (flow of protein over an antibody-coated metal film, with incident and reflected light coupled through a prism); b) schematic of nanohole array sensor operation showing 4 nanohole array sensors (incident light transmitted through nanohole arrays in a metal film). Drawings are not to scale.

Nanohole array sensor technology is similar in that local index-of-refraction changes on a metal surface correlate with optical changes. In this technique, however, a transmission mode (rather than a reflected mode) of optical coupling is used, as shown in Fig. 1b. The transmitted light passes through arrays of holes in the metal film, where the holes are substantially smaller than the incident wavelength of light, by coupling with local surface plasmons [23]. This transmission mode, called extraordinary optical transmission (EOT), was an unexpected phenomenon [24] and has only recently been applied to monitoring molecular binding events [25,26]. As detailed below, nanohole array sensor technology offers unique advantages over SPR, combining real-time temporal resolution with a spatial resolution beyond that of modern microarrays.

B. Instrumentation and Sensor Chip Design

Nanohole array technology, making use of EOT, eliminates the need for prisms or optical gratings as in SPR. This simplifies the optical instrumentation, allowing for improved multiplexing. Traditional prism-coupled SPR has been limited to small numbers of parallel sensors (
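As background to the EOT mechanism invoked above, the spectral positions of the transmission peaks for a square nanohole array at normal incidence are commonly approximated in the plasmonics literature (a standard result, not derived in this paper) by

\lambda_{\max}(i,j) \approx \frac{P}{\sqrt{i^{2}+j^{2}}}\,\sqrt{\frac{\varepsilon_{m}\,\varepsilon_{d}}{\varepsilon_{m}+\varepsilon_{d}}}

where P is the hole-array period, (i, j) are the grating orders, and ε_m and ε_d are the dielectric constants of the metal and the adjacent dielectric. Protein binding raises the local index of refraction, shifting ε_d and hence the transmitted peak wavelength, which is the quantity monitored in a nanohole array binding assay.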
