
Vol.1 N.2 - Journal of Aerospace Technology and Management


Journal of Aerospace Technology and Management (JATM) is a serial techno-scientific publication published by the Departamento de Ciência e Tecnologia Aeroespacial (DCTA) that aims to serve the international aerospace community. It contains articles selected by an Editorial Committee composed of researchers and technologists from the scientific community. The journal is published quarterly, and its main objective is to provide an archival forum for presenting scientific and technological research results related to the aerospace field, as well as to promote an additional source of diffusion and interaction. It provides public access to all of its contents, following the principle that making research freely available generates a greater global exchange of knowledge.




Journal of Aerospace Technology and Management
J. Aerosp. Technol. Manag.

Vol. 1, Nº. 2, Jul.–Dec. 2009

EDITOR IN CHIEF
Francisco Cristóvão Lourenço de Melo

Institute of Aeronautics and Space (IAE) São José dos Campos, SP, Brazil

[email protected]

ASSOCIATE EDITORS
Adriana Medeiros Gama
Ana Cristina Avelar
Antonio Pascoal Del' Arco Junior
Carlos de Moura Neto
Cynthia Cristina Martins Junqueira
Elizabeth da Costa Mattos
João Luiz Filgueiras Azevedo
Jorge Carlos Narciso Dutra
Roberto Roma de Vasconcelos
Vinícius André Rodrigues Henriques
Waldemar de Castro Leite

EDITORIAL PRODUCTION
Ana Cristina Camargo Sant'Anna
Glauco da Silva
Helena Prado de Amorim Silva
Márcia Maria Ernandes Robles Fracasso
Rosilene Aparecida Rosário de Souza

PROOFREADING
B. V. Young and J. Lyon
English Consulting

EXECUTIVE EDITOR
Ana Marlene Freitas de Morais

Institute of Aeronautics and Space (IAE) São José dos Campos, SP, Brazil

[email protected]

EDITORIAL BOARD
Adam S. Cumming – Defence Science and Technology Laboratory – Fort Halstead – UK
Adriano Gonçalves – Institute of Aeronautics and Space – São José dos Campos – Brazil
Adolfo Gomes Marto – Institute of Aeronautics and Space – São José dos Campos – Brazil
Alberto W. S. Mello Jr. – Institute of Aeronautics and Space – São José dos Campos – Brazil
Alexandre Garcia – Institute of Aeronautics and Space – São José dos Campos – Brazil
Antonio Sérgio Bezerra Sombra – Federal University of Ceará – Fortaleza – Brazil
Cristina Moniz A. Lopes – Institute of Aeronautics and Space – São José dos Campos – Brazil
Edílson A. Camargo – Institute of Aeronautics and Space – São José dos Campos – Brazil
Edson Cocchieri Botelho – São Paulo State University – Guaratinguetá – Brazil
Edvaldo Simões da Fonseca Jr. – University of São Paulo – São Paulo – Brazil
Emerson Sarmento Gonçalves – Institute of Aeronautics and Space – São José dos Campos – Brazil
Evandro Nahara – University of Taubaté – Taubaté – Brazil
Ézio Castejon Garcia – Technological Institute of Aeronautics – São José dos Campos – Brazil
Flamínio Levi Neto – Federal University of Brasília – Brasília – Brazil


Francisco Dias Rocamora Jr. – Institute for Advanced Studies – São José dos Campos – Brazil
Francisco Piorino Neto – Institute of Aeronautics and Space – São José dos Campos – Brazil
Gilberto Fisch – Institute of Aeronautics and Space – São José dos Campos – Brazil
Gilson da Silva – National Industrial Property Institute – Rio de Janeiro – Brazil
Hazin Ali Al Quresh – Federal University of Santa Catarina – Florianópolis – Brazil
Hugo Enrique Hernández Figueroa – State University of Campinas – Campinas – Brazil
Inácio Malmonge Martin – University of Taubaté – Taubaté – Brazil
João Francisco Galera Monico – São Paulo State University – São Paulo – Brazil
João Marcos Travassos Romano – State University of Campinas – Campinas – Brazil
Johannes Quaas – Max Planck Institute for Meteorology – Hamburg – Germany
José Atílio Fritz Rocco – Technological Institute of Aeronautics – São José dos Campos – Brazil
José Carlos Góis – University of Coimbra – Coimbra – Portugal
José H. Sousa Damiani – Technological Institute of Aeronautics – São José dos Campos – Brazil
Ligia M. Souto Vieira – Technological Institute of Aeronautics – São José dos Campos – Brazil
Liu Yao Chao – University of Vale do Paraíba – São José dos Campos – Brazil
Luciene Dias Villar – Institute of Aeronautics and Space – São José dos Campos – Brazil
Luis Augusto T. Machado – National Institute for Space Research – Cachoeira Paulista – Brazil
Luis Cláudio Rezende – Institute of Aeronautics and Space – São José dos Campos – Brazil
Luiz Alberto de Andrade – Institute of Aeronautics and Space – São José dos Campos – Brazil
Luiz Antonio Pessan – Federal University of São Carlos – São Carlos – Brazil
Luiz Claudio Pardini – Institute of Aeronautics and Space – São José dos Campos – Brazil
Márcio S. Luz – Department of Aerospace Science and Technology – São José dos Campos – Brazil
M. Filomena F. Ricco – Department of Aerospace Science and Technology – São José dos Campos – Brazil
Marisa Roberto – Technological Institute of Aeronautics – São José dos Campos – Brazil
Michelle Leali Costa – São Paulo State University – Guaratinguetá – Brazil
Miguel J. R. Barboza – Engineering School of Lorena – Lorena – Brazil
Miguel Beltrame Jr. – University of Vale do Paraíba – São José dos Campos – Brazil
Mirabel Cerqueira Resende – Institute of Aeronautics and Space – São José dos Campos – Brazil
Miriam Kasumi Hwang – Institute of Aeronautics and Space – São José dos Campos – Brazil
Nicolau A.S. Rodrigues – Institute for Advanced Studies – São José dos Campos – Brazil
Paulo Gilberto de Paula Toro – Institute for Advanced Studies – São José dos Campos – Brazil
Rita de Cássia L. Dutra – Institute of Aeronautics and Space – São José dos Campos – Brazil
Roberto Costa Lima – Naval Research Institute – Rio de Janeiro – Brazil
Samuel Machado Leal da Silva – Army Technological Center – Rio de Janeiro – Brazil
Sérgio Henrique da Silva Carneiro – Brazilian Air Force – Brasília – Brazil
Silvana Navarro Cassu – Institute of Aeronautics and Space – São José dos Campos – Brazil
Takashi Yoneyama – Technological Institute of Aeronautics – São José dos Campos – Brazil
Ulrich Teipel – University of Nuremberg – Nuremberg – Germany
Vera Lúcia Lourenço – Institute of Aeronautics and Space – São José dos Campos – Brazil
Wim P.C. de Klerk – TNO Defence – Rijswijk – The Netherlands


CONTENTS

EDITORIAL

133 40 years of a dream
Silva, O.

TECHNICAL PAPERS

135 An assessment of unstructured grid finite volume schemes for cold gas hypersonic flow calculations
Azevedo, J. L. F., Korzenowski, H.

153 ADN – The new oxidizer around the corner for an environmentally friendly smokeless propellant
Nagamachi, M. Y., Oliveira, J. I. S., Kawamoto, A. P., Dutra, R. C. L.

161 New trends in advanced high energy materials
Cumming, A. S.

167 Determination of polymer content in energetic materials by FT-IR
Mattos, E. C., Diniz, M. F., Nakamura, N. M., Dutra, R. C. L.

177 Synthesis and characterization by infrared spectroscopy of hydantoin-based bonding agents, used in composite propellants
Pires, D. C., Stockler-Pinto, D. V. B., Sciamareli, J., Costa, J. R., Diniz, M. F., Iha, K., Dutra, R. C. L.

185 Computational simulation of non-geodesic filament winding of pressure vessel of rocket motor
Heitkoetter, R. F., Almeida, S. F. M., Costa, L. E. V. L.

193 Performance evaluation of GPS receiver under equatorial scintillation
Moraes, A. O., Perrella, W. J.

201 Reliability prediction for structures under cyclic loads and recurring inspections
Mello Jr., A. W. S., Mattos, D. F. V.

211 Radiosounding-derived convective parameters for the Alcântara Launch Center
Oliveira, F. P., Oyama, M. D.

217 Fatigue behaviour study on repaired aramid fiber/epoxy composites
Botelho, E. C., Mazur, R. L., Costa, M. L., Cândido, G. M., Rezende, M. C.

223 Study of plasticizer diffusion in a solid rocket motor's bondline
Libardi, J., Ravagnani, S. P., Morais, A. M. F., Cardoso, A. R.

Journal of Aerospace Technology and Management
Vol. 01, N. 02, Jul.–Dec. 2009

ISSN 1984-9648
ISSN 2175-9146 (online)


231 Processing of thermo-structural carbon-fiber reinforced carbon composites
Pardini, L. C., Gonçalves, A.

243 Evaluation of the response of a "Long-Counter" for 241Am/Be neutrons
Federico, C. A., Oliveira, W. A., Pereira, M. A., Gonçalves, O. L.

247 Evaluation of the impact of convolution masks on algorithm to supervise scenery changes at space vehicle integration pads
Bizarria, F. C., Barbosa, S. A., Bizarria, J. W. P., Rosário, J. M.

255 Electromagnetic behavior of radar absorbing materials based on Ca hexaferrite modified with CoTi ions and doped with La
Silva, V.A., Pereira, J. J., Nohara, E. L., Rezende, M. C.

265 The management of knowledge and technologies in a space program
Alves, M. C. B., Morais, A. M. F.

THESIS ABSTRACTS

273 ELICERE: The elicitation process for dependability goals in critical computer systems – A case study for space application
Lahoz, C. H. N.

273 Variability management in software product lines using adaptive object and reflection
Burgareli, L. A.

273 Complex permittivity and permeability behaviors, 2-18 GHz, of RAM based on carbonyl iron and MnZn ferrite
Gama, A. M.

274 Investigation of the distribution of the film cooling for the liquid rocket engine – LRE with 75 kN thrust
Silva, L. A.

274 Adjusting the vertical profile of wind data obtained from anemometric tower and radiosounding in the "Alcantara Launch Center"
Leão, R. C.

275 Radar absorbing materials based on thin films processed by physical vapor deposition technique
Soethe, V. L.

275 Synthesis, doping and characterization of furfuryl alcohol resin and phenol-furfuryl alcohol resin aimed at the optimization of glass-like carbon processing
Oishi, S. S.


Ozires Silva*
Provost of UNIMONTE University

São Paulo – [email protected]

Editorial

40 years of a dream

On August 19, 2009, EMBRAER celebrated its first 40 years of existence. The company was created by federal law and began operations by combining government capital with private-sector capital, after several attempts to find entrepreneurs interested in facing the challenge of producing aircraft in Brazil. This solution ended a long-standing impasse and provided a fresh start for the creation of a Brazilian aviation industry. The very first idea was to design a modest-sized prototype transport plane, rugged and suited to operating on unpaved and relatively short runways such as those that predominated in smaller towns.

In the mid-1960s, the behavior of the world air transport market showed that, with the advent of jets – ever larger and faster – the airlines would be forced to give up serving communities that did not have the airport infrastructure or did not generate sufficient traffic to fill at least 60 percent of the seats in an aircraft. At that time, a small team of aeronautical engineers, graduates of ITA – the Technological Institute of Aeronautics – observed that this prevailing trend pointed to a real market niche, barely perceived by the large worldwide commercial airplane producers, such as BOEING, CONVAIR, DOUGLAS, LOCKHEED, AEROSPATIALE and others.

In these forty years, EMBRAER has achieved remarkable success. Its products operate in more than 80 countries, and it is considered one of the largest manufacturers of commercial aircraft, producing mainly regional transport jets among other products.

Over the years, thanks to its strategy, the company was able to grow and, by constantly innovating, conquered important positions in the international marketplace. Nowadays, EMBRAER shows a large production capacity, with a variety of aircraft types and models in its production line and a well-established ability to meet the most varied requirements of interested buyers. In its very competitive market, the Brazilian airplanes generally offer characteristics and performance capable of responding to what is expected in the most varied regions and countries. Besides the commercial jets, the current product line includes military, executive and specialized airplanes, and it is astonishing to observe the flexibility and efficiency with which the specialized teams handle the complexity and sophistication of modern aircraft, which carry the most advanced equipment and technologies.

The regional air transport system emerged from simple ideas of the past and is today one of the most successful air transport segments. According to IATA – the International Air Transport Association – the operation of international flights accumulates heavy financial losses, and the operation of domestic flights faces difficulties in several regions throughout the world. However, smaller carriers, specialized in meeting the needs of micro-regions with high efficiency and frequency and using aircraft from EMBRAER and its competitors, manage to remain firmly established. Indeed, many of them show outstanding results, providing a range of services and linking cities which would otherwise certainly remain isolated from advancement and progress.

ITA, an institution created by the Brazilian Armed Forces at the end of the 1940s, was truly responsible for the existence of EMBRAER, demonstrating the importance and the transforming power of a quality education geared to the country's development and progress. We should cast our eyes back to the visionaries and pioneers of that period, led by Brigadier Casimiro Montenegro Filho, who, displaying fantastic perspicacity, laid the foundations of the results we reap today.

*Mr. Ozires Silva joined the Brazilian Air Force in 1948 and graduated as a Military Pilot Officer in 1951. He graduated in Aeronautical Engineering in 1951 at ITA (Technological Institute of Aeronautics). After obtaining a Master's degree in Aeronautical Sciences in 1966 from the California Institute of Technology (CALTECH), he returned to Brazil to lead the development of the BANDEIRANTE aircraft and to promote the creation of EMBRAER, where he was elected its first CEO in 1970. He holds an honorary degree in Engineering from Queen's University of Belfast and has been inducted into the Smithsonian Institution Hall of Fame (Washington, DC) and the World Trade Association Hall of Fame (Los Angeles, CA). He has authored several books: "A Decolagem de um Sonho: História da Criação da Embraer", "Cartas a um Jovem Empreendedor", "Casimiro Montenegro Filho, a trajetória de um visionário", "Nas Asas da Educação: A trajetória da Embraer", "A Decolagem de um Grande Sonho", and "Etanol a revolução verde e amarela". Since 2008 he has held the position of Provost at UNIMONTE (Monte Serrat University) in Santos, SP.

Remembering this example, we should ponder and meditate on the fact that, in the global world in which we live today, it is necessary to scrutinize the future and identify opportunities for the development of activities capable of safeguarding the population's quality of life. Nowadays the challenges are greater, since competition among products and services is universal and Brazil cannot remain on the sidelines. Every Brazilian should therefore think about how he or she can contribute, adding individual initiatives to the collective effort. No country can take the risk of failing to keep pace with important developments, and each must take visionary steps to lead a creative and enterprising society that assures everyone the opportunity to participate in building the new world in which we live.


João Luiz F. Azevedo*
Institute of Aeronautics and Space

São José dos Campos – Brazil [email protected]

Heidi Korzenowski
VSE – Vale Solutions in Energy
São José dos Campos – Brazil

[email protected]

*author for correspondence

An assessment of unstructured grid finite volume schemes for cold gas hypersonic flow calculations

Abstract: A comparison of five different spatial discretization schemes is performed considering a typical high speed flow application. Flowfields are simulated using the 2-D Euler equations, discretized in a cell-centered finite volume procedure on unstructured triangular meshes. The algorithms studied include a central difference-type scheme, and 1st- and 2nd-order van Leer and Liou flux-vector splitting schemes. These methods are implemented in an efficient, edge-based, unstructured grid procedure which allows for adaptive mesh refinement based on flow property gradients. Details of the unstructured grid implementation of the methods are presented together with a discussion of the data structure and of the adaptive refinement strategy. The application of interest is the cold gas flow through a typical hypersonic inlet. Results for different entrance Mach numbers and mesh topologies are discussed in order to assess the comparative performance of the various spatial discretization schemes.

Keywords: Hypersonic flow, Cold gas flow, Finite volume method, Unstructured grids, Spatial discretization schemes.

Received: 03/09/09
Accepted: 06/10/09

INTRODUCTION

The present work considers that the flowfields of interest are simulated using the 2-D Euler equations. For such hyperbolic equations, the physical propagation of perturbations occurs along characteristic lines. Schemes based on central spatial discretizations possess symmetry with respect to a change in sign of the Jacobian matrix eigenvalues and, therefore, do not distinguish upstream from downstream influences. In this case, the schemes do not incorporate the physical propagation properties of the flow equations into the discretized formulation, and this generates oscillations in the vicinity of discontinuities which have to be damped by the addition of artificial dissipation terms. The problem is, therefore, to determine the adequate amount of artificial dissipation, which should be large enough to damp instabilities and, at the same time, small enough to avoid the destruction of flow features.

Upwind schemes take into account physical properties in the discretization process and they have the advantage of being naturally dissipative. Flux vector splitting methods introduce the information of the sign of the eigenvalues in the discretization process, and the flux terms are split and discretized according to the sign of the associated propagation speeds. Steger and Warming (see, for instance, Steger and Warming, 1981, and Hirsch, 1990) make use

of the homogeneous property of the Euler equations and split the flux vectors into forward and backward contributions by splitting the eigenvalues of the Jacobian matrix into non-negative and non-positive groups. The split flux contributions are, then, spatially differentiated according to one-sided upwind discretizations. However, these forward and backward fluxes are not differentiable when an eigenvalue changes sign, and this can produce oscillations at sonic points. In order to avoid these oscillations, van Leer (1982) determines a continuously differentiable flux vector splitting that leads to smoother solutions at sonic points.

In the present work, the interface fluxes are calculated using five different algorithms, including a central difference-type scheme, and van Leer (1982) and Liou (1996) flux vector splitting schemes. In the central difference case, the interface fluxes are obtained from an average vector of conserved variables at the interface, which is calculated by straightforward arithmetic averages of the vector of conserved variables on both sides of the interface. Since this approach provides no numerical dissipation terms to control nonlinear instabilities, an appropriate blend of undivided Laplacian and biharmonic operators is explicitly added as the necessary artificial dissipation terms. For the first-order van Leer scheme, the interface fluxes are obtained by van Leer's formulas (van Leer, 1982) and they are constructed using the conserved properties for the i-th control volume and its neighbor across the given


interface. The second-order scheme considers a MUSCL approach (Anderson, Thomas and van Leer, 1986), that is, the interface fluxes are formed using left and right states at the interface, which are linearly reconstructed by primitive variable extrapolation on each side of the interface. The extrapolation process is constrained by a limiter in order to avoid the creation of new local extrema. The first- and second-order Liou schemes consider that the convective operator can be written as a sum of the convective and pressure terms (Liou, 1996). The second-order scheme also considers a MUSCL approach.

The Euler equations are discretized in a cell-centered finite-volume-based procedure on unstructured triangular meshes. Time march uses a fully explicit, 2nd-order accurate, five-stage Runge-Kutta time stepping scheme. Only steady-state calculations have been considered in the present context, and variable time stepping and implicit residual smoothing procedures have been employed to accelerate convergence to steady-state. Computations using a fine, fixed, unstructured mesh are compared to those obtained with an adaptive mesh procedure in order to assess the quality of the solutions calculated by the different schemes implemented and in order to analyze the mesh influence in the capture of the flow features of interest.

The schemes discussed here are applied to the solution of supersonic/hypersonic inlet flows. A 2-D inlet configuration which is representative of some proposed inlet geometries for a typical transatmospheric vehicle is considered. The inlet entrance conditions were varied from a freestream Mach number M∞ = 4 up to M∞ = 16 in order to test the schemes implemented for a wide range of possible inlet operating conditions. The fluid was treated as a perfect gas and, hence, no chemistry was taken into account. From a physical standpoint, the present simulations are typical of cold gas flows which are usually achieved in experimental facilities such as gun tunnels. This is certainly not representative of actual flight conditions in which dissociation and vibrational relaxation are important phenomena, especially for the higher Mach number cases. However, it is a necessary step in order to construct a robust code to deal with the complete environment encountered in actual flight.

THEORETICAL FORMULATION

The 2-D time dependent Euler equations can be written in integral form as

$\frac{\partial}{\partial t} \int_V Q \, dV + \int_V \left( \nabla \cdot \vec{P} \right) dV = 0$ ,   (1)

where $\vec{P} = E \hat{\imath} + F \hat{\jmath}$. The application of the divergence theorem to Eq. (1) will yield

$\frac{\partial}{\partial t} \int_V Q \, dV + \oint_S \left( \vec{P} \cdot \vec{n} \right) dS = 0$ ,   (2)

where V represents the area of the control volume, S is its boundary and $\vec{n}$ is the outward normal to the S boundary.

The vector of conserved quantities, Q, and the convective flux vectors, E and F , are given by

$Q = \left[ \rho , \; \rho u , \; \rho v , \; e \right]^T$ ,   (3)

$E = \left[ \rho u , \; \rho u^2 + p , \; \rho u v , \; (e + p) u \right]^T , \quad F = \left[ \rho v , \; \rho u v , \; \rho v^2 + p , \; (e + p) v \right]^T$ .   (4)

Here, ρ denotes the density, p is the pressure, u and v represent the Cartesian velocity components, and e is the total energy per unit of volume.

If the equations are discretized using a cell-centered finite-volume-based procedure, the discrete vector of conserved variables, Qi , is defined as an average over the i-th control volume as

$Q_i = \frac{1}{V_i} \int_{V_i} Q \, dV$ .   (5)

In this context, the discrete flow variables can be assumed as attributed to the centroid of each cell if necessary. With the previous definition of Qi , Eq. (2) can be rewritten for the i-th volume as

$\frac{\partial}{\partial t} \left( V_i Q_i \right) + \oint_{S_i} \left( \vec{P} \cdot \vec{n} \right) dS = 0$ .   (6)

SPATIAL DISCRETIZATION ALGORITHMS

Spatial discretization is essentially concerned with finding a discrete approximation to the surface integral in Eq. (6). This approximation is the so-called convective operator, C (Qi ), i.e.,

$C(Q_i) \simeq \oint_{S_i} \left( \vec{P} \cdot \vec{n} \right) dS$ .   (7)

Centered Scheme

In the centered scheme case, the convective operator is defined as

$C(Q_i) = \sum_k \left[ E(Q_{ik}) \, \Delta y_{ik} - F(Q_{ik}) \, \Delta x_{ik} \right]$ .   (8)

In this expression, Qik is the arithmetic average of the conserved properties in the cells which share the ik


interface, where i is the i-th control volume and k is its neighbor. The terms ∆xik and ∆yik are calculated as

$\Delta x_{ik} = x_{k2} - x_{k1} , \quad \Delta y_{ik} = y_{k2} - y_{k1}$ ,   (9)

where the points (xk1 , yk1) and (xk2 , yk2) are the vertices which define the interface between cells i and k (Azevedo, 1992).

The spatial discretization procedure presented in Eq. (8) is equivalent to a central difference scheme. Therefore, artificial dissipation terms must be added in order to control nonlinear instabilities (Jameson and Mavriplis, 1986). In the present case, the artificial dissipation operator, D(Qi), is formed as a blend of undivided Laplacian and biharmonic operators (Mavriplis, 1988, and Mavriplis, 1990). These mimic, in an unstructured mesh context, the concept of using second and fourth difference terms (Jameson, Schmidt and Turkel, 1981, and Pulliam, 1986). Therefore, the artificial dissipation operator is given by

$D(Q_i) = d^{(2)}(Q_i) - d^{(4)}(Q_i)$ ,   (10)

where $d^{(2)}(Q_i)$ represents the contribution of the Laplacian operator and $d^{(4)}(Q_i)$ represents the contribution of the biharmonic operator.

In order to form the biharmonic operator, it is necessary to first define the undivided Laplacian operator for the i-th control volume as

$\nabla^2 Q_i = \sum_k \left( Q_k - Q_i \right)$ ,   (11)

where the summation in k is taken over all control volumes which have a common interface with the i-th cell. The biharmonic operator is, then, defined as (Azevedo, 1992, and Azevedo and Oliveira, 1994)

$d^{(4)}(Q_i) = \sum_k \epsilon^{(4)}_{ik} \left( \nabla^2 Q_k - \nabla^2 Q_i \right)$ .   (12)

The Laplacian operator is responsible for avoiding oscillations near discontinuities and it is constructed as

$d^{(2)}(Q_i) = \sum_k \epsilon^{(2)}_{ik} \left( Q_k - Q_i \right)$ .   (13)

Here, the coefficient $\epsilon^{(2)}_{ik}$ is given by

$\epsilon^{(2)}_{ik} = K^{(2)} \max \left( \nu_i , \nu_k \right)$ ,   (14)

where the switching function νi is defined in terms of the local pressure gradient as

$\nu_i = \dfrac{\left| \sum_k \left( p_k - p_i \right) \right|}{\sum_k \left( p_k + p_i \right)}$ .   (15)

Close to discontinuities, the biharmonic operator produces oscillations. Therefore, the coefficient $\epsilon^{(4)}_{ik}$ is defined such that it is switched off when the second difference coefficient, $\epsilon^{(2)}_{ik}$, becomes large, which typically occurs near shocks or other discontinuities. The $\epsilon^{(4)}_{ik}$ coefficient is defined as

$\epsilon^{(4)}_{ik} = \max \left( 0 , \; K^{(4)} - \epsilon^{(2)}_{ik} \right)$ .   (16)

Typical values for the constants (Mavriplis, 1988) are $K^{(2)} = 1/4$ and $K^{(4)} = 3/256$.
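For readers implementing this operator, the following Python sketch shows one way to assemble the blended dissipation described above. It is an illustrative sketch only, not the authors' code: the array layout, the neighbors adjacency list and the function name are assumptions introduced here, while the pressure switch and the blending follow Eqs. (10)-(16).

```python
import numpy as np

def artificial_dissipation(Q, p, neighbors, K2=1.0/4.0, K4=3.0/256.0):
    """Blend of undivided Laplacian and biharmonic dissipation terms for a
    cell-centered scheme (illustrative sketch of Eqs. 10-16).

    Q         : (ncells, 4) conserved variables per cell
    p         : (ncells,)   cell pressures
    neighbors : list of lists; neighbors[i] holds the cells sharing an edge with i
    """
    ncells = Q.shape[0]
    nu = np.empty(ncells)          # pressure-based switch of Eq. (15)
    lap = np.zeros_like(Q)         # undivided Laplacian of Eq. (11)
    for i in range(ncells):
        nu[i] = abs(sum(p[k] - p[i] for k in neighbors[i])) / \
                sum(p[k] + p[i] for k in neighbors[i])
        lap[i] = sum(Q[k] - Q[i] for k in neighbors[i])

    D = np.zeros_like(Q)
    for i in range(ncells):
        for k in neighbors[i]:
            eps2 = K2 * max(nu[i], nu[k])    # second-difference coefficient
            eps4 = max(0.0, K4 - eps2)       # switched off near discontinuities
            D[i] += eps2 * (Q[k] - Q[i]) - eps4 * (lap[k] - lap[i])
    return D
```

In an actual solver, the inner accumulation would be driven by the edge-based loop discussed later in the paper, so that each ik face is visited only once.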

First-Order Van Leer Scheme

The convective operator, C (Qi ), is defined for the van Leer flux vector splitting scheme (van Leer, 1982, and Anderson, Thomas and van Leer, 1986) by the expression

$C(Q_i) = \sum_k \left( E_{ik} \, \Delta y_{ik} - F_{ik} \, \Delta x_{ik} \right)$ ,   (17)

where ∆xik and ∆yik are given by Eq. (9). In the present case, the interface fluxes, Eik and Fik , are defined as (Azevedo and Figueira da Silva, 1997)

$E_{ik} = \begin{cases} E^{+}(Q_i) + E^{-}(Q_k) , & \Delta y_{ik} \geq 0 \\ E^{-}(Q_i) + E^{+}(Q_k) , & \Delta y_{ik} < 0 \end{cases} \qquad F_{ik} = \begin{cases} F^{+}(Q_i) + F^{-}(Q_k) , & \Delta x_{ik} \leq 0 \\ F^{-}(Q_i) + F^{+}(Q_k) , & \Delta x_{ik} > 0 \end{cases}$   (18)

Here, $E_i^{\pm}$ and $F_i^{\pm}$ are the split fluxes calculated using van Leer's formulas (van Leer, 1982, and Anderson, Thomas and van Leer, 1986) and the conserved properties of the i-th control volume. The evaluation of the split fluxes in the van Leer context can be summarized as follows:

.

(19)

In the previous equations, the Mach number in the x-direction is defined as $M_x = u/a$ and the split mass fluxes are $f^{\pm} = \pm \rho a \left[ \left( M_x \pm 1 \right) / 2 \right]^2$. Similar expressions are obtained for $F^{\pm}$ using $M_y = v/a$. With this flux vector definition, the splitting is continuously differentiable at sonic and stagnation points.
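As a reference for the split fluxes just mentioned, the sketch below evaluates the x-direction van Leer splitting for a 2-D perfect gas. The mass-flux expression matches the one quoted in the text; the momentum and energy terms follow the usual van Leer (1982) formulas and are written here from the standard literature, since the exact expressions of Eq. (19) are not recoverable from this copy.

```python
import numpy as np

def van_leer_split_E(q, gamma=1.4):
    """Standard 2-D van Leer flux vector splitting in the x-direction
    (textbook form; illustrative sketch).  q = [rho, rho*u, rho*v, e]."""
    rho, ru, rv, e = q
    u, v = ru / rho, rv / rho
    p = (gamma - 1.0) * (e - 0.5 * rho * (u * u + v * v))
    a = np.sqrt(gamma * p / rho)
    Mx = u / a

    E = np.array([ru, ru * u + p, ru * v, (e + p) * u])   # full x-flux vector
    if Mx >= 1.0:                  # supersonic flow to the right
        return E, np.zeros(4)
    if Mx <= -1.0:                 # supersonic flow to the left
        return np.zeros(4), E

    Ep, Em = np.empty(4), np.empty(4)
    for s, F in ((+1.0, Ep), (-1.0, Em)):
        f_mass = s * rho * a * ((Mx + s) / 2.0) ** 2       # split mass flux, as in the text
        w = (gamma - 1.0) * u + s * 2.0 * a
        F[0] = f_mass
        F[1] = f_mass * w / gamma
        F[2] = f_mass * v
        F[3] = f_mass * (w * w / (2.0 * (gamma ** 2 - 1.0)) + 0.5 * v * v)
    return Ep, Em                  # E+ and E-
```

The split fluxes $F^{\pm}$ follow by exchanging the roles of u and v, as noted above.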


Second-Order Van Leer Scheme

In the present work, the implementation of the 2nd-order van Leer scheme is based on an extension of the Godunov approach. The projection stage of the Godunov scheme, in which the solution is projected in each cell on piecewise constant states, is modified. This constitutes the so-called MUSCL (Monotone Upstream-Centered Scheme for Conservation Laws) approach (van Leer, 1979) for the extrapolation of primitive variables. By this approach, left and right states at a given interface are linearly reconstructed by primitive variable extrapolation on each side of the interface, together with some appropriate limiting process (Hirsch, 1990) in order to avoid the generation of new extrema. The vector of primitive variables is taken as W = [p, u, v, T ]T , in the present case. The convective operator, C (Qi ), can be defined as indicated in Eq. (17). The interface fluxes, Eik and Fik , are defined as

$E_{ik} = E^{+}(Q_L) + E^{-}(Q_R) , \quad F_{ik} = F^{+}(Q_L) + F^{-}(Q_R)$ ,   (20)

where QL = Q(WL ) and QR = Q(WR ) are the left and right states at the ik interface obtained by the linear extrapolation process previously discussed.

There are two aspects of the unstructured grid implementation of such a scheme which deserve further consideration. The first aspect concerns the definition of “left” and “right” states at a given cell interface. Since there is no concept similar to curvilinear coordinates in this case, the cell interfaces can have virtually any orientation and one must decide which way to “look” in order to construct left and right states. This is done in the present case based on the components of the vector normal to the edge, as already indicated in Eq. (18) for the 1st-order van Leer scheme. The other aspect is associated with deciding which second control volume will be used for the reconstruction process in addition to the volume immediately adjacent to the interface considered. The authors emphasize that an edge-based data structure (Mavriplis, 1988) is being used in this development and further discussion of the data structure used will be presented later in the paper.

The procedure adopted in the present case to handle the second aspect is inspired by the work of Lyra (1994). The major difference between the present implementation and the cited reference lies in the direction in which the one-dimensional stencil is constructed. In Lyra (1994), the stencil for extrapolation is constructed along the direction of the edge. It must be emphasized that Lyra (1994) is working with a finite element approach. Here, since a cell-centered finite volume method is of interest, the extrapolation stencil is constructed in a direction normal

to the edge. In an attempt to reinterpret the 1-D ideas in the present unstructured grid context, a line is drawn normal to the edge and passing through the center of the inscribed circle. A third point is located over this line at a distance from the center of the inscribed circle equal to the diameter of the circle. The code, then, identifies in which control volume this 3rd point lies, and it uses the properties of this triangle in the linear reconstruction of the primitive variables.

In order to make the nomenclature clear, the two triangles which are adjacent to the edge under consideration are denoted i and k. The second triangle identified by the previously described process and associated with triangle i is denoted l. The corresponding one associated with k is denoted triangle m. This is illustrated in Fig. 1. Therefore, in the calculation of the E± fluxes, the left state, QL , is defined using the properties of the i and l triangles and the right state, QR , is defined using those of the k and m triangles, if ∆yik ≥ 0. The reverse is true if ∆yik < 0. Similarly, the definition of the F± fluxes uses data at the i and l triangles to define the left state and data at the k and m triangles to define the right state if ∆xik ≤ 0, and vice-versa if ∆xik > 0.

Figure 1: Sketch of the extrapolation stencil used for primitive variable linear reconstruction in the 2nd-order upwind schemes.

With the procedure just described, the state variables are represented as piecewise linear within each cell, instead of piecewise constant. But even considering a 2nd-order flux vector splitting scheme with a MUSCL approach, it is possible to obtain oscillations in the solution. Therefore one must use nonlinear corrections, namely limiters, to avoid any oscillations. In the present case, a simple minmod


limiter (Hirsch, 1990) is adopted. Previous experience (Azevedo and Figueira da Silva, 1997) with other limiters, such as the van Leer, van Albada and superbee limiters, has indicated that these may not reach machine zero convergence in some cases. On the other hand, the minmod limiter was always able to drive convergence to machine zero in the cases tested in Azevedo and Figueira da Silva (1997) and it was, therefore, the limiter chosen for the present study. In order to obtain the expression for the limiter, one has to compute the ratios of consecutive variations. The limiter will be defined as a function of these ratios. Hence, if one defines

, (21)

,

the limiters, which will be denoted by φ− and φ+ , can be written in the minmod case as

(22)

With the previous definitions, the left and right states at the interface can be written as:

(23)

The functions F− and F+ reconstruct, respectively, the WL and WR states, and they are given by

, (24)

,

where φ− and φ+ are the limiters previously defined.
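Since the exact expressions of Eqs. (21)-(24) are not legible in this copy, the sketch below only illustrates the general idea of a minmod-limited MUSCL extrapolation over the l-i-k-m stencil of Fig. 1, assuming for simplicity an evenly spaced one-dimensional stencil; the helper names are placeholders and not the authors' notation.

```python
def minmod(a, b):
    """Return zero if the two slopes disagree in sign, otherwise the one of
    smaller magnitude."""
    if a * b <= 0.0:
        return 0.0
    return a if abs(a) < abs(b) else b

def muscl_states(W_l, W_i, W_k, W_m):
    """Limited left/right primitive states at the ik interface, built on the
    one-dimensional stencil l-i-k-m (illustrative sketch only).
    Each argument is a sequence of primitive variables [p, u, v, T]."""
    WL, WR = [], []
    for wl, wi, wk, wm in zip(W_l, W_i, W_k, W_m):
        WL.append(wi + 0.5 * minmod(wi - wl, wk - wi))   # extrapolate from cell i to the face
        WR.append(wk - 0.5 * minmod(wk - wi, wm - wk))   # extrapolate from cell k to the face
    return WL, WR
```

The limited states WL and WR are then converted back to conserved variables and fed to the split-flux routines, as in Eq. (20).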

First-and Second-Order Liou Schemes

The Liou schemes implemented in this work consider that the convective operator can be expressed as a sum of

the convective and pressure terms (Liou, 1994, and Liou, 1996). The inviscid flux vectors can be written as

$E = a M_x \, \Phi + P_x , \quad F = a M_y \, \Phi + P_y$ ,   (25)

where the Φ, Px and Py vectors are defined as

$\Phi = \left[ \rho , \; \rho u , \; \rho v , \; \rho H \right]^T , \quad P_x = \left[ 0 , \; p , \; 0 , \; 0 \right]^T , \quad P_y = \left[ 0 , \; 0 , \; p , \; 0 \right]^T$ .   (26)

In the previous expressions, p is the pressure, H is the total specific enthalpy, Mx = u/a, and My = v/a.

The approach followed in the present work in order to extend Liou’s ideas (Liou, 1994) to the unstructured grid case consists of defining a local one-dimensional stencil normal to the edge considered. The reason for this can be perceived if one observes, based on Eq. (17), that the contribution of the ik edge to the convective operator can be written as

. (27)

where the normal $\vec{n}_{ik}$ to the ik edge, positive outwards with respect to the i-th triangle, is defined as

$\vec{n}_{ik} = \frac{1}{\ell_{ik}} \left( \Delta y_{ik} \, \hat{\imath} - \Delta x_{ik} \, \hat{\jmath} \right)$ .   (28)

Here, ℓik is the length of the ik edge. Hence, one can write

. (29)

where, for now, it is sufficient to write $F^{(c)}_{ik}$ and $P_{ik}$ as

. (30)

For the construction of the first-order scheme, one must identify the “left” (or L) state, as defined in Liou (1994, 1996), as the properties of the i-th triangle and the “right” (or R) state as those of the k-th triangle (see Fig. 1 for the geometry definition). Hence, the interface Mach number, Mik , also according to the definition in Liou (1994, 1996), can be written as

$M_{ik} = M_L^{+} + M_R^{-}$ ,   (31)

where $M_L^{+} = M^{+}(M_L)$ and $M_R^{-} = M^{-}(M_R)$. The split Mach numbers are defined as


$M^{+}(M) = \begin{cases} \frac{1}{2} \left( M + |M| \right) , & |M| \geq 1 \\ M_{\beta}^{+}(M) , & |M| < 1 \end{cases}$ ,   (32)

and, similarly,

$M^{-}(M) = \begin{cases} \frac{1}{2} \left( M - |M| \right) , & |M| \geq 1 \\ M_{\beta}^{-}(M) , & |M| < 1 \end{cases}$ .   (33)

The $M_{\beta}^{\pm}$ terms can be written as

$M_{\beta}^{\pm}(M) = \pm \frac{1}{4} \left( M \pm 1 \right)^2 \pm \beta \left( M^2 - 1 \right)^2$ .   (34)

The present work used β = 1/8, as suggested in Liou (1994). Moreover, in order to achieve a unique splitting in Liou’s sense, the left and right Mach numbers are defined as

, (35)

where

,. (36)

The corresponding speed of sound, aik, at the interface is given by

, (37)

where

,

(38)

and a similar definition for ãR. The pressure, pik, at the ik interface is given by

$p_{ik} = p^{+}(M_L) \, p_L + p^{-}(M_R) \, p_R$ .   (39)

The split pressures, still following the expressions in Liou (1994, 1996), can be written as

$p^{+}(M) = \begin{cases} \dfrac{M + |M|}{2 M} , & |M| \geq 1 \\ p_{\alpha}^{+}(M) , & |M| < 1 \end{cases}$ ,   (40)

and, similarly,

$p^{-}(M) = \begin{cases} \dfrac{M - |M|}{2 M} , & |M| \geq 1 \\ p_{\alpha}^{-}(M) , & |M| < 1 \end{cases}$ .   (41)

The $p_{\alpha}^{\pm}$ terms can be written as

$p_{\alpha}^{\pm}(M) = \frac{1}{4} \left( M \pm 1 \right)^2 \left( 2 \mp M \right) \pm \alpha M \left( M^2 - 1 \right)^2$ .   (42)

This work used α = 3/16, as suggested in Liou (1994). The convective operator, as defined in Eq. (17), can be finally written as

, (43)

where

(44)

and Pik has already been defined in Eq. (30). The second order scheme follows exactly the same formulation, except that the left and right states are obtained by a MUSCL extrapolation of primitive variables as described in the previous section. Therefore, the left state is defined by a limited extrapolation of the properties in the i-th and l-th triangles, and the right state is defined by a limited extrapolation of the properties in the k-th and m-th triangles. The minmod limiter was again used in this case.
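The sketch below gathers the AUSM+ ingredients described above into a one-dimensional interface flux normal to a face. The split Mach number and pressure polynomials are the standard Liou (1996) forms with β = 1/8 and α = 3/16 as quoted in the text; the interface speed of sound is left as a caller-supplied value, since the particular choice made in Eqs. (35)-(38) is not recoverable from this copy, and the function and argument names are placeholders.

```python
import numpy as np

def ausm_plus_flux(rhoL, uL, pL, HL, rhoR, uR, pR, HR, a_face,
                   alpha=3.0/16.0, beta=1.0/8.0):
    """AUSM+ flux normal to a face for (mass, normal momentum, energy);
    u* are face-normal velocities, H* total specific enthalpies and
    a_face an interface speed of sound (illustrative sketch)."""

    def M_split(M, s):             # split Mach number, sign s = +1 or -1
        if abs(M) >= 1.0:
            return 0.5 * (M + s * abs(M))
        return s * 0.25 * (M + s) ** 2 + s * beta * (M * M - 1.0) ** 2

    def p_split(M, s):             # split pressure factor
        if abs(M) >= 1.0:
            return 0.5 * (M + s * abs(M)) / M
        return 0.25 * (M + s) ** 2 * (2.0 - s * M) + s * alpha * M * (M * M - 1.0) ** 2

    ML, MR = uL / a_face, uR / a_face
    M_face = M_split(ML, +1.0) + M_split(MR, -1.0)            # interface Mach number
    p_face = p_split(ML, +1.0) * pL + p_split(MR, -1.0) * pR  # interface pressure

    # The convective part is fully upwinded according to the sign of M_face.
    Phi = (np.array([rhoL, rhoL * uL, rhoL * HL]) if M_face >= 0.0
           else np.array([rhoR, rhoR * uR, rhoR * HR]))
    return a_face * M_face * Phi + np.array([0.0, p_face, 0.0])
```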

TIME DISCRETIZATION

The Euler equations, fully discretized in space and assuming a stationary mesh, can be written as

$V_i \, \dfrac{d Q_i}{d t} + C(Q_i) - D(Q_i) = 0$ ,   (45)

where the D(Qi ) operator is identically zero if an upwind spatial discretization is used. The present work uses a fully explicit, 2nd-order accurate, 5-stage Runge-Kutta time-stepping scheme (Mavriplis, 1988) to advance the solution of the governing equations in time. The time integration scheme can, therefore, be written as

$Q_i^{(0)} = Q_i^{n}$ ,

$Q_i^{(j)} = Q_i^{(0)} - \alpha_j \, \dfrac{\Delta t_i}{V_i} \left[ C \left( Q_i^{(j-1)} \right) - D \left( Q_i^{(j-1)} \right) \right] , \quad j = 1, \dots, 5$ ,

$Q_i^{n+1} = Q_i^{(5)}$ ,   (46)

where the superscripts n and n + 1 indicate that these are property values at the beginning and at the end of the n-th time step. The values used for the α coefficients were

. (47)

It should be observed that the convective operator, C (Q), is evaluated at every stage of the integration process, but the artificial dissipation operator, D(Q), is only evaluated at the two initial stages (and, obviously, only for the central difference scheme). For steady-state problems, a local time stepping option has been implemented


in order to accelerate convergence. The details of the implementation of the variable time step option can be found in Azevedo and Figueira da Silva (1997).
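A compact sketch of the time integration loop is given below. The five stage coefficients are not legible in this copy of Eq. (47); the values used here (1/4, 1/6, 3/8, 1/2, 1) are the set commonly employed with this family of hybrid multistage schemes and should be read as an assumption, as should the callable names. The artificial dissipation is evaluated only in the first two stages and then frozen, as stated in the text.

```python
import numpy as np

ASSUMED_ALPHAS = (1.0/4.0, 1.0/6.0, 3.0/8.0, 1.0/2.0, 1.0)   # assumed coefficient set

def rk5_step(Q, dt, vol, convective, dissipation, alphas=ASSUMED_ALPHAS):
    """One explicit 5-stage Runge-Kutta step for V_i dQ_i/dt + C(Q_i) - D(Q_i) = 0.

    Q           : (ncells, 4) conserved variables at time level n
    dt          : (ncells,)   local time steps (local time stepping for steady state)
    vol         : (ncells,)   cell areas
    convective  : callable, C(Q) -> (ncells, 4)
    dissipation : callable, D(Q) -> (ncells, 4); zero for the upwind schemes
    """
    Q0 = Q.copy()
    Qj = Q.copy()
    D = np.zeros_like(Q)
    for j, alpha in enumerate(alphas):
        if j < 2:                               # D is frozen after the second stage
            D = dissipation(Qj)
        R = convective(Qj) - D
        Qj = Q0 - alpha * (dt / vol)[:, None] * R
    return Qj
```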

DATA STRUCTURES

In a cell-centered finite volume context, the standard procedure for flux calculation consists of a loop over the control volumes which adds up the contribution of each edge, or side, to form the flux balance for that particular volume. This is usually called a volume-based data structure, which is the equivalent in the present case of an element-based data structure for the finite element community. Although “natural” and straightforward to implement, this procedure is not the most efficient because fluxes end up being computed twice for each edge of the control volume. For an explicit scheme, this means that the code could theoretically run twice as fast simply by implementing some procedure that would avoid recomputing the fluxes for the same edge.

One of the possibilities for solving this problem is to implement a so-called edge-based (or side-based) data structure (Mavriplis, 1988). In this case, the idea is to index the code computations based on the control volume edges. The discussion presented here considers a triangular unstructured grid. However, a similar procedure could be implemented regardless of the type of control volume used. The connectivity information for a cell-centered finite volume algorithm on a volume-based data structure consists of two major “tables.” The first one indicates, for each triangle, the nodes of the mesh which form the triangle. The other table points to the three triangles which are neighbors of the particular triangle considered. For an edge-based data structure, the connectivity information is centered on the edges and, for each edge, enough information should be stored to allow the necessary computations over the complete grid.

In the present work, since a cell-centered scheme is being used, the following procedure is adopted:

For each edge, store: (n1, n2, i, k) .   (48)

Here, n1 and n2 are the two nodes which define the edge, i is the triangle to the left of the n1n2 segment, and k is the triangle to the right of it (see Fig. 1 for details). Moreover, the n1n2 segment is assumed to be oriented from n1 to n2 . This notion of orientation of a segment is fundamental to the algorithm because, with the present implementation, the nodes n1 and n2 are arranged in a counterclockwise fashion for the i-th control volume and in a clockwise fashion for the k-th control volume. Therefore, the flux computed for this particular edge is added to the flux balance equation of the i-th control volume and subtracted

from that of the k-th control volume. Hence, for an edge-based data structure, the main loop runs over edges, or sides, and the contribution of the side to the neighboring control volumes is computed and added (or subtracted) to (from) that volume’s flux balance equation.

The previous information would be enough for the centered scheme and for the first-order upwind schemes implemented here. However, as already discussed, further information is necessary in order to implement the second-order versions of the upwind schemes. For the second-order upwind schemes, the edge-based information stored must be augmented in order to also include the identification of the two additional triangles which are used for the linear reconstruction process. Hence, using the nomenclature previously defined, one should:

For each edge, store: (n1, n2, i, k, l, m) .   (49)

The procedure used to define triangles l and m has already been previously described in the paper. The search operations necessary to identify these triangles are performed at a pre-processing stage, such that the computational cost associated with this search is negligible in the overall solution cost. It should be emphasized that this identification must also be performed after each adaptive refinement pass, since the complete connectivity information is updated in the refinement process.
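The edge-based flux assembly just described can be summarized by the short loop below. The orientation convention (add to cell i, subtract from cell k) is the one stated in the text; the handling of boundary edges and the actual interface flux routine are omitted, and all names are illustrative.

```python
import numpy as np

def assemble_convective_operator(edges, Q, nodes, interface_flux):
    """Edge-based assembly of C(Q_i) for a cell-centered scheme (sketch).

    edges          : iterable of (n1, n2, i, k) tuples; the n1->n2 segment is
                     counterclockwise for cell i and clockwise for cell k
    Q              : (ncells, 4) conserved variables
    nodes          : (nnodes, 2) node coordinates
    interface_flux : callable (Q_i, Q_k, dx, dy) -> E_ik*dy - F_ik*dx for the
                     chosen spatial discretization scheme
    """
    C = np.zeros_like(Q)
    for n1, n2, i, k in edges:
        dx = nodes[n2, 0] - nodes[n1, 0]        # Delta x_ik of Eq. (9)
        dy = nodes[n2, 1] - nodes[n1, 1]        # Delta y_ik of Eq. (9)
        flux = interface_flux(Q[i], Q[k], dx, dy)
        C[i] += flux                            # edge traversed counterclockwise for i
        C[k] -= flux                            # and clockwise for k
    return C
```

Each interior face is therefore visited exactly once, which is the source of the factor-of-two saving mentioned above; for the 2nd-order schemes the tuples would carry the extra entries (l, m) of Eq. (49).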

ADAPTIVE GRID REFINEMENT

The concept behind using an adaptive mesh strategy is to refine regions where large gradients occur. For many problems, the regions that need to be refined are small compared with the size of the complete computational domain. Therefore, one can reduce storage and CPU requirements by the use of adaptive refinement, when compared with a fixed fine mesh which would yield the same resolution of the relevant flow features. In order to identify the regions that require grid refinement, a sensor must be defined. The sensor used in this work is based on gradients of flow properties. Its general definition could be expressed as

, (50)

and $\phi_{n,\max}$ and $\phi_{n,\min}$ are the maximum and the minimum values of the $\phi_n$ property in the flowfield. Despite this general definition, and despite having implemented the complete sensor calculation as indicated in the above


equation, all results presented in this work have used a sensor based on density gradients, i.e., φn = ρ.

The first step of the adaptive procedure is to compute the flow on an existing coarse mesh. With this preliminary solution, one can calculate the sensor as previously described. The code marks all triangles in which the sensor exceeds some specified threshold value (the threshold value will be denoted Γ in the present paper), and the marked triangles are refined. A new finer mesh is then constructed by enrichment of the original coarse grid.
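The marking step of this procedure can be sketched as follows. Because the sensor of Eq. (50) is not legible in this copy, the sketch assumes a simple density-based sensor, namely the largest neighbor-to-neighbor density jump normalized by the overall density range, purely for illustration; the subdivision bookkeeping of Fig. 2 is not reproduced.

```python
def mark_cells_for_refinement(rho, neighbors, threshold):
    """Return the indices of the triangles whose density-based sensor exceeds
    the threshold Gamma (illustrative sketch; the sensor form is assumed)."""
    rho_range = max(rho) - min(rho)
    marked = []
    for i, neighs in enumerate(neighbors):
        sensor = max(abs(rho[k] - rho[i]) for k in neighs) / rho_range
        if sensor > threshold:                  # e.g. Gamma = 0.005, as used in the results
            marked.append(i)
    return marked
```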

The mesh enrichment procedure consists of introducing an additional node for each side of a triangle marked for refinement. For interior sides, this additional node is placed at the mid-point of the side whereas, for boundary sides, it is necessary to refer to the boundary definition to ensure that the new node is placed on the true boundary. After this initial pass, the code has to search all triangles to identify cells that have two or three divided sides. Each of these cells is subdivided into four new triangles. This subdivision may eventually mark new faces. Therefore, this process has to be performed until there are no triangles with more than one marked face. In order to avoid hanging nodes, the triangles that have one marked face should be divided by halving. Figure 2 illustrates the three possible ways of triangle subdivision.

The second part of the refinement process consists of identifying all triangles which were refined by halving. This information is stored for the next refinement step because, if there is again an attempt to subdivide these triangles by halving, this is not allowed. Experience has shown that repeated triangle division by halving has a strong detrimental effect on mesh quality. Therefore, if the next refinement step tries to divide by halving a triangle which was obtained by halving in a previous division, the logic in the code forces the original triangle to be divided into four new triangles before the refinement procedure is allowed to continue. When the mesh enrichment procedure has been completed, the new control volumes receive the property values of their "father" triangle and the flow solver is re-started.

Figure 2: Schematic representation of the three possible triangle subdivision processes.

Figure 3: Initial and intermediate grids in the adaptive refinement procedure.

RESULTS AND DISCUSSION

A 2-D inlet configuration which is representative of some proposed inlet geometries for a typical transatmospheric vehicle was used as a test case in the present work. To analyze the different schemes, an adaptive mesh and both coarse and fine fixed unstructured meshes were used. In the present work, the expression "fixed" mesh will denote a grid which was generated as close as possible to an equally spaced mesh in the unstructured context. Therefore, the expression "fixed grid" is used here in contrast to the expression "adaptively refined" grid. The adaptive mesh was obtained with 3 passes of refinement using the 1st-order Liou scheme as the flow solver. The adaptive refinement process described in the previous section was used and the sensor was based on density gradients. The initial mesh had 399 nodes and 683 triangles. The successive refinement passes used threshold values Γ = (0.005, 0.005, 0.005). This mesh ended up with 11152 nodes and 21692 volumes. The initial mesh and the two intermediate meshes in this process are shown in Fig. 3. In the present case, 500 iterations were performed before the first refinement pass, 800 iterations between the first and the second ones, and 1200 iterations between the second and the third refinement passes. This represents a typical pattern observed in the present study


in the sense that the optimal number of iterations between successive refinement passes increases as the grid is refined. The final mesh is shown as the bottom plot in Fig. 4. This is the adaptively refined grid which was used for comparison of the various schemes in the paper.

The coarse fixed grid had 4204 nodes and 8006 volumes. Results for coarser grids were also obtained, but these results were deemed either excessively poor for the purpose of the present comparisons or of comparable resolution with the ones obtained with the above referred grid. A fine fixed mesh was also generated and this grid had 12005 nodes and 23324 triangles. The major requirement in the generation of this fine fixed grid was to have an essentially equally spaced mesh with the number of nodes, or triangles, comparable to those of the final adaptively refined grid. Therefore, the three different meshes used in the calculations and comparisons, which are reported here, are presented in Fig. 4. The coarse fixed mesh is seen as the top plot in Fig. 4, the fine mesh is the middle one and the adaptively refined grid is the bottom plot in this figure.

For the present simulations, the fluid was treated as a perfect gas with constant specific heat and no chemistry was taken into account. The purpose of these simulations is to compare the different schemes applied to high Mach number flows in order to verify if they are able to represent all flow features, such as strong shocks, shock reflections and interactions, and expansion regions. Moreover, there is interest in verifying whether the schemes can avoid oscillations in the presence of such strong discontinuities.

The results considering an inlet entrance Mach number M∞ = 12 are discussed in detail in the paper. The Mach contours obtained with the five schemes are presented in Figs. 5–9 for the calculations with the coarse fixed mesh. The figures present, respectively, the results with the centered scheme, the 1st- and 2nd-order van Leer flux-vector splitting schemes and the 1st- and 2nd-order Liou AUSM+ schemes. The contours indicate that the overall flow features are well captured by all solutions, at least in the upstream portion of the inlet. However, they also suggest that, at least with this coarse fixed mesh, all schemes produce oscillations in the solution. The oscillations are more evident in the results with the centered scheme, as one might expect. Nevertheless, the somewhat ragged contours for both the upper and lower wall entrance shocks in all calculations are an indication that there are oscillations in these solutions. Moreover, the Mach number contours shown in Figs. 5–9 also indicate that the resolution of flow features downstream of the interaction region of the two entrance shocks is not very good with this coarse mesh. Essentially, one cannot see much of the shock reflections and expansions that should be present in these downstream regions.

A summary of the analysis of these figures indicates that the entrance flow features are well captured by the centered scheme, as already discussed, except that one can clearly see the oscillations upstream of the strong upper wall entrance shock. One can see in Figs. 6 and 8 that the 1st-order van Leer and 1st-order Liou schemes smooth out the spatial gradients by the intrinsic artificial dissipation present in these schemes, which is typical of 1st-order upwind discretizations. Moreover, the 2nd-order schemes implemented in this work presented a better shock-capturing capability compared with the other schemes. They do not have as much shock-smearing as their 1st-order versions and, at the same time, they do not present as much evidence of solution oscillation as the 2nd-order centered scheme. Unfortunately, as discussions

Figure 4: Complete view of the three computational meshes used in the present comparisons: (a) coarse fixed mesh; (b) fine fixed mesh; and (c) adaptive mesh.

Figure 5: Mach number contours obtained with the coarse fixed mesh for the centered scheme (M∞ = 12).


later in this paper will show, the analysis of the Mach number contours alone can be misleading as far as an overall study of solution oscillations is concerned.

Corresponding results for the fine fixed grid are shown in Figs. 10–14. These again consider an entrance Mach number M∞ = 12 and calculations with the five

discretization schemes are represented in these figures. The first aspect which is clearly evident from the figures is that the upstream entrance shocks are much better defined in the finer grid solution. Moreover, some of the downstream flow features, which could not be seen in the coarse grid solution, are now starting to become apparent in the fine grid. However, the grid resolution in the downstream portions of the flow is clearly still not sufficient to resolve all details of the flowfield in these regions, especially for the more dissipative 1st-order schemes.

The oscillations in the upper wall entrance shock for the centered scheme solution are also quite visible in this fine fixed grid solution, as shown in Fig. 10. These oscillations are restricted to a narrower region of the flow, as one should expect due to the increased mesh refinement, but they are still present in the solution. Moreover, oscillations in the lower wall entrance shock can also be seen in Fig. 10. The definition of the entrance shocks in the upwind solutions is improved with the current grid, both for the 1st- and the 2nd-order schemes. This improvement is consistent with the one observed for the centered scheme case. However, one can observe some sort of an inflection in the upper wall entrance shock for the 2nd-order van Leer flux-vector splitting scheme solution (see Fig. 12), which clearly does not have any physical meaning. Actually, it is possible to see a similar problem in the coarse grid solution with this scheme, shown in Fig. 7. The problem, however, becomes even more evident in the fine grid result shown in Fig. 12. A close inspection of the Mach number contours obtained with the 2nd-order AUSM+ scheme also reveals a similar inflection problem in the upper wall entrance shock. As one can see in Fig. 14, however, this spurious behavior is much less pronounced in the solution with the 2nd-order Liou scheme.

Despite the clear improvement in flow feature resolution provided by the finer fixed mesh, as already pointed out, an overall assessment of the previous results indicates that some aspects of the flow are still very poorly resolved even with this fine grid. In particular, one can observe that the lower wall entrance shock is quite smeared and that the downstream portions of the flow are not adequately resolved. Hence, the use of an adaptively refined mesh seemed to be the best approach in order to allow the grid density to be driven by the solution itself. The corresponding Mach number contours for freestream Mach number M∞ = 12, computed with the final adaptively refined mesh, are shown in Figs. 15–19. In general, these results indicate a much sharper definition of both upper and lower wall entrance shocks and of the flow features downstream of the shock interaction region. Although the full resolution of this interaction may still require further grid refinement, the results in Figs. 15–19 can already provide an idea of the flow structure in the downstream portions of the configuration.

Figure 6: Mach number contours obtained with the coarse fixed mesh for the 1st-order van Leer scheme (M∞ = 12).

Figure 7: Mach number contours obtained with the coarse fixed mesh for the 2nd-order van Leer scheme (M∞ = 12).

Figure 8: Mach number contours obtained with the coarse fixed mesh for the 1st-order Liou scheme (M∞ = 12).


Figure 9: Mach number contours obtained with the coarse fixed mesh for the 2nd-order Liou scheme (M∞ = 12).

Figure 10: Mach number contours obtained with the fine fixed mesh for the centered scheme (M∞ = 12).

Figure 11: Mach number contours obtained with the fine fixed mesh for the 1st-order van Leer scheme (M∞ = 12).

Figure 13: Mach number contours obtained with the fine fixed mesh for the 1st-order Liou scheme (M∞ = 12).

Figure 12: Mach number contours obtained with the fine fixed mesh for the 2nd-order van Leer scheme (M∞ = 12).

Figure 14: Mach number contours obtained with the fine fixed mesh for the 2nd-order Liou scheme (M∞ = 12).

One can see in Fig. 15 that the centered scheme still exhibits oscillations in this case, especially near the upper wall inlet lip. However, a comparison of Figs. 4 and 15 indicates that the oscillations mostly occur in a region in which the mesh is quite coarse, i.e., in a region upstream of the area densely refined due to the presence of the upper wall shock. In any event, the centered scheme was not really expected to be able to cope with such strong shocks without oscillations. The Mach number contours for the calculations with the upwind schemes, however, also indicate the existence of oscillations in those solutions. For instance, the results with both 1st- and 2nd-order versions of the van Leer scheme, shown in Figs. 16 and 17, present a rather ragged first contour in the entrance shock region. Moreover, both calculations also present considerable smearing of the weaker lower wall shock. It is true, though, that even this shock is much better defined by the adaptively refined grid solutions with the two versions of van Leer's scheme than by the corresponding results with the other grids. The solutions with the van Leer schemes do not show much detail of the downstream portions of the flow. Again, one can see differences between the 1st- and 2nd-order results in this downstream region, but the scheme is clearly too diffusive despite the strong mesh refinement in the region.

An analysis based solely on the Mach number contours in Figs. 15–19 would indicate that the calculations with both versions of the AUSM+ scheme yield the best resolution of flow features in this case. Furthermore, the 2nd-order results in Fig. 19 provide the best definition of both upper and lower wall entrance shocks, of the result of the shock-shock interaction, and of the downstream expansion and compression regions. There are still indications of solution oscillations even for these results, especially near the upper wall inlet lip. However, they clearly provide the best overall description of the flow features among all schemes and meshes analyzed. Unfortunately, as the forthcoming discussion will show, there are also serious problems with the Liou scheme solutions, both for the 1st- and 2nd-order versions of the scheme, which complicate the selection of a best overall result among the various tests performed.

Dimensionless pressure distributions along both the inlet upper and lower walls were also analyzed in order to obtain a better assessment of the solution quality for all test cases. As before, all cases consider an entrance Mach number M∞ = 12. An initial comparison shows plots of pressure distributions, obtained with each one of the spatial discretization schemes studied, for all three meshes. The analytical solution for the inlet entrance region is also shown in each figure. This solution is correct up to the point at which structures resulting from the shock-shock interaction start to impinge upon the inlet walls.
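For reference, this analytical solution follows from standard oblique-shock relations applied to the inlet ramps. The short Python sketch below indicates how such a reference value can be obtained for a calorically perfect gas; the 10-degree ramp angle in the example is only a placeholder, since the actual inlet geometry is defined earlier in the paper, and the routine is not part of the authors' solver.

import math

def oblique_shock_pressure_ratio(M1, theta_deg, gamma=1.4):
    """Weak-solution shock angle from the theta-beta-M relation and the
    corresponding static pressure ratio across the oblique shock."""
    theta = math.radians(theta_deg)

    def deflection(beta):
        # theta-beta-M relation for a calorically perfect gas
        return math.atan(2.0 / math.tan(beta)
                         * (M1 ** 2 * math.sin(beta) ** 2 - 1.0)
                         / (M1 ** 2 * (gamma + math.cos(2.0 * beta)) + 2.0))

    # The deflection grows from zero at the Mach angle up to its maximum,
    # so the weak solution can be located by bisection on that interval.
    lo = math.asin(1.0 / M1) + 1.0e-8
    candidates = [lo + i * (0.5 * math.pi - lo) / 2000.0 for i in range(2001)]
    hi = max(candidates, key=deflection)      # shock angle of maximum deflection
    if theta > deflection(hi):
        raise ValueError("no attached-shock solution for this deflection angle")
    a, b = lo, hi
    for _ in range(60):
        mid = 0.5 * (a + b)
        a, b = (mid, b) if deflection(mid) < theta else (a, mid)
    beta = 0.5 * (a + b)
    M1n = M1 * math.sin(beta)                 # normal Mach number ahead of the shock
    p_ratio = 1.0 + 2.0 * gamma / (gamma + 1.0) * (M1n ** 2 - 1.0)
    return math.degrees(beta), p_ratio

# Example: freestream Mach 12 over a hypothetical 10-degree ramp.
print(oblique_shock_pressure_ratio(12.0, 10.0))

For M∞ = 12 and the assumed 10-degree ramp, the weak-shock solution gives a static pressure ratio of roughly 9 across the entrance shock, which gives an idea of the shock strengths being captured in these calculations.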

Figure 15: Mach number contours obtained with the adaptively refined mesh for the centered scheme (M∞ = 12).

Figure 16: Mach number contours obtained with the adaptively refined mesh for the 1st-order van Leer scheme (M∞ = 12).

Figure 17: Mach number contours obtained with the adaptively refined mesh for the 2nd-order van Leer scheme (M∞ = 12).

Figure 18: Mach number contours obtained with the adaptively refined mesh for the 1st-order Liou scheme (M∞ = 12).

Figure 20 presents the dimensionless wall pressure distributions, for both upper and lower walls, obtained with the centered scheme. All calculations eventually reach the correct post-shock pressure plateaux for both upper and lower walls. However, the numerical solutions approach their corresponding plateaux rather slowly, i.e., over a fairly long longitudinal distance, and in a very oscillatory fashion for both fixed mesh solutions. The behavior of the pressure distribution obtained with the adaptive mesh is far less oscillatory. The curve for the coarse fixed mesh also presents a very distinctive pressure peak immediately upstream of the expansion corner in the upper wall. This peak is caused by a shock, resulting from the shock-shock interaction, which impinges upon the upper wall. This shock, however, cannot be seen in the Mach number contours shown in Fig. 5. In general, the results with the fine fixed grid and with the adaptive grid are similar for this case, except for the oscillations in the upper wall shock in the fixed grid solution, as already discussed.

The comparison of the results obtained with the two versions of the van Leer scheme is shown in Figs. 21 and 22, respectively for the 1st- and 2nd-order schemes. The pressure distributions in the upper wall shock are much less oscillatory in this case, especially for the 1st-order scheme solution. This is to be expected, since this scheme is quite a bit more diffusive than the others tested here. Actually, the previous discussion has already indicated that the van Leer scheme is more diffusive and, clearly, its 1st-order implementation is more diffusive than the 2nd-order one. The solution with the 2nd-order scheme again presents oscillations in this region of the flow for the coarse fixed grid. Aside from the problems already discussed with regard to the entrance shocks, one can also observe marked differences in the pressure distributions, obtained with the different meshes, in the downstream portion of the flow. This is true for both 1st- and 2nd-order cases, but it seems to be more pronounced in the 1st-order results. Moreover, the results with the 2nd-order version of the scheme indicate a mild oscillation in the upper wall pressure distributions at x ≅ 70 cm. This feature can be seen in the results for all three meshes with the 2nd-order van Leer scheme, although its spatial position changes slightly depending on the grid. Such an oscillation is clearly incorrect, since the pressure must be constant in this region.

The results with the 1st-order and the 2nd-order Liou schemes are shown in Figs. 23 and 24. The most distinctive feature of these results is that, in both cases, the solutions have strong oscillations at the upper wall entrance shock. There are oscillations in the lower wall shock too, but these are mild compared with the ones observed in the upper wall case. It is interesting that the same extreme oscillations are observed in the 1st-order results as well as in the 2nd-order ones. The adaptive grid calculations present the results with the smallest oscillations in this case. However, even such milder oscillations would still be considered unacceptable if the present flow solver capability were to be coupled to the equations describing the real gas effects present in practice for such applications. One can also observe that there is good agreement among the pressure distributions obtained with the different meshes in this case for the downstream portions of the flow. The agreement is not as good for the coarse fixed mesh, but this mesh is too coarse to resolve flow features in the downstream region anyway, as already discussed. Moreover, Figs. 23 and 24 show pressure distributions in the downstream portions of the flow which are quite different from the ones obtained with the van Leer scheme (see Figs. 21 and 22).

Further analysis of the results can be accomplished by looking at essentially the same data shown in Figs. 20 to 24, but from a different perspective. Figures 25–28 therefore allow a more direct comparison of the discretization scheme effects on the solution, for a given mesh.

Figure 19: Mach number contours obtained with the adaptively refined mesh for the 2nd-order Liou scheme (M∞ = 12).

Figure 20: Analysis of the mesh effect in the wall pressure distributions obtained with the centered scheme (M∞ = 12).

As before, the dimensionless pressure distributions along the upper and lower inlet walls are shown in these figures. The analytical solution for the pressure distribution along the upstream portion of both upper and lower inlet entrance walls is also shown for comparison purposes.

Figure 21: Analysis of the mesh effect in the wall pressure distributions obtained with the 1st-order van Leer scheme (M∞ = 12).

Figure 22: Analysis of the mesh effect in the wall pressure distributions obtained with the 2nd-order van Leer scheme (M∞ = 12).

Figure 23: Analysis of the mesh effect in the wall pressure distributions obtained with the 1st-order Liou scheme (M∞ = 12).

Figure 24: Analysis of the mesh effect in the wall pressure distributions obtained with the 2nd-order Liou scheme (M∞ = 12).

Figure 25: Analysis of the discretization scheme effect in the wall pressure distributions obtained for the adaptively refined grid (M∞ = 12). Comparison of centered and 1st-order upwind schemes.

Figure 26: Analysis of the discretization scheme effect in the wall pressure distributions obtained for the adaptively refined grid (M∞ = 12). Comparison of centered and 2nd-order upwind schemes.

The comparison in Fig. 25 includes the centered scheme and the two 1st-order upwind schemes, for solutions computed using the adaptively refined mesh. The analogous comparison, including the two 2nd-order upwind schemes, is presented in Fig. 26.

Aside from some aspects which have already been discussed, such as the fact that the Liou scheme solutions are very oscillatory at the entrance shocks, one can state that, in general, there is a fairly good correlation between the results with the centered scheme and those with the AUSM+ scheme. This is true for both 1st- and 2nd-order implementations of the Liou scheme.

On the other hand, the results with the van Leer scheme are quite different from the others downstream of the expansion corners on both upper and lower walls. Although these differences are also present in the 2nd-order van Leer solutions, the discrepancies are more evident in the 1st-order results. Essentially, the solution obtained with the 1st-order implementation of the van Leer scheme seems to indicate that shock waves impinge on the upper and lower inlet walls approximately at the location of the wall expansion corners; for the lower wall, it would be more precise to state that the impingement would occur at the upstream expansion corner. The results with the other schemes do not corroborate this observation, as they show no shock impingement on the inlet upper wall. In this case, even the 2nd-order van Leer solution does not show any shock impingement on the upper wall. Moreover, for the lower wall, both 1st- and 2nd-order van Leer solutions are fairly similar and, again, completely different from the wall pressure distributions obtained with the other schemes in this downstream flow region. Nevertheless, the wall pressure distributions obtained with the van Leer method indicate that this scheme is the most successful, among the algorithms tested, in preventing oscillations across the strong upper wall entrance shock. This is particularly true for the 1st-order version of the scheme.

A similar comparison is shown in Figs. 27 and 28 for the calculations performed with the fine fixed grid. The most relevant comments which can be made in this case are essentially equivalent to those already presented in the context of the analysis of Figs. 25 and 26. In any event, it is interesting to observe that the pressure distributions obtained with the van Leer scheme are very similar to those calculated with the other schemes in this case, especially for the 2nd-order version of the method. The 1st-order van Leer results, particularly for the upper wall, are still quite different from the pressure distributions obtained with the other schemes. Unfortunately, the better correlation observed with the fine fixed grid may simply be the result of having a mesh which is too coarse in the downstream regions of the flow to actually capture the phenomena that should be present there.

Finally, pressure contours obtained with the adaptively refined mesh are shown in Figs. 29–31. These figures present, respectively, the contours for the solutions with the centered scheme, the 1st-order Liou scheme and the 2nd-order Liou scheme. The major objective of including these figures here is to provide further understanding of the flow features, especially in the downstream regions. The pressure contours seem to be more revealing of the flow structures which appear downstream of the shock-shock interaction region. In general, the three solutions are quite similar in this case, as the previous discussions have already indicated. The more diffusive character of the 1st-order scheme is not as evident in Fig. 30, except for the thicker upper wall entrance shock. Pressure contours calculated with the fixed meshes (not shown here) indicate that the additional numerical diffusivity of the 1st-order scheme destroys some of the information in the downstream region. Moreover, it is also clear that the upper wall entrance shock is more sharply defined by the 2nd-order upwind solution than by either the centered or the 1st-order upwind calculations. The figures also seem to indicate that further refinement of the interaction region would still be necessary in order to fully characterize these downstream structures.

Figure 27: Analysis of the discretization scheme effect in the wall pressure distributions obtained for the fine fixed grid (M∞ = 12). Comparison of centered and 1st-order upwind schemes.

Figure 28: Analysis of the discretization scheme effect in the wall pressure distributions obtained for the fine fixed grid (M∞ = 12). Comparison of centered and 2nd-order upwind schemes.

It is important to emphasize that similar calculations were performed for inlet entrance Mach numbers M∞ = 4, 8 and 16 in the context of the present study. These results are not included here because the conclusions that can be drawn from them are essentially equivalent to those obtained for the M∞ = 12 case.

Figure 29: Dimensionless pressure contours obtained with the adaptively refined mesh for the centered scheme (M∞ = 12).

Figure 30: Dimensionless pressure contours obtained with the adaptively refined mesh for the 1st-order Liou scheme (M∞ = 12).

Figure 31: Dimensionless pressure contours obtained with the adaptively refined mesh for the 2nd-order Liou scheme (M∞ = 12).

As one would expect, the oscillations observed in essentially all the calculations reported here decrease as the inlet entrance Mach number is lowered. In a similar fashion, the results for the M∞ = 16 case are even more oscillatory than those discussed here.

Moreover, the authors would also like to emphasize that each case could have been run directly with the adaptive refinement capability. This was not done in the present work because the final meshes obtained in each case would be different, since there are small differences in the converged solutions produced by the various schemes. Therefore, the authors have chosen to compare the solutions obtained on a single mesh, generated by an adaptive refinement procedure using one of the available spatial discretization schemes. Moreover, the most relevant comparisons in the present case are those between the adaptively refined mesh results and the ones obtained with the fine fixed mesh, because these two meshes have approximately the same number of control volumes. As the results presented in the paper have demonstrated, the quality of the solutions obtained with the adaptive grid is certainly better for the same computational cost.

Furthermore, it is also important to emphasize that, in actual flight, an inlet flow with entrance Mach number equal to 12, or 16, could not be simulated with the perfect gas assumption. In other words, real gas behavior would have to be taken into account. From a physical standpoint, however, the present calculations could be considered as the simulation of the cold gas flows which are usually achieved in experimental facilities such as gun tunnels. In order to extrapolate these results to actual flight conditions, dissociation and vibrational relaxation would certainly have to be included in the formulation. Nevertheless, the present simulations could be seen as a necessary step in the construction of a robust code to deal with the complete environment encountered in actual flight.

CONCLUDING REMARKS

The present work performed a comparison of five different spatial discretization schemes for cold gas hypersonic flow simulations. The schemes presented here were applied to the solution of supersonic and hypersonic inlet flows. The inlet entrance conditions were varied from M∞ = 4 up to M∞ = 16. An inviscid formulation was used and the fluid was treated as a perfect gas. Clearly, for actual flight condition simulation, real gas effects would have to be taken into account. Here, however, the consideration of very high Mach number flows simply has the objective of testing the behavior of the different schemes in the presence of strong shocks.

The governing equations are discretized on an unstructured triangular mesh by a cell-centered finite volume algorithm. An edge-based data structure is used to store the connectivity information, and this has yielded an efficient procedure for interface flux calculations. The equations are advanced in time by an explicit, 5-stage, 2nd-order accurate, Runge-Kutta time stepping procedure. The spatial discretization considers a 2nd-order centered scheme and two upwind schemes, namely a van Leer and a Liou flux-vector splitting scheme, both with 1st- and 2nd-order implementations. The authors believe that the form in which the Liou scheme has been implemented in the present unstructured grid context represents an original contribution, since the splitting is performed in the direction normal to the triangular cell edges. Therefore, instead of having to compute x and y splittings for a 2-D flow, only one single splitting calculation is performed per cell edge, in the edge-normal direction.
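As an illustration of this edge-based, edge-normal strategy, the sketch below (in Python) assembles the cell residuals with a single numerical flux evaluation per edge and then advances the solution with an explicit 5-stage Runge-Kutta loop. The data structures, the function names and the Jameson-type stage coefficients are illustrative assumptions, not a transcription of the authors' solver.

import numpy as np

def accumulate_residuals(Q, edges, normals, num_flux):
    """Edge-based residual assembly for a cell-centered finite volume scheme.

    Q        : (ncells, 4) array of conserved variables per control volume
    edges    : (nedges, 2) array with the indices of the two cells sharing each edge
    normals  : (nedges, 2) array with the edge normal scaled by the edge length,
               pointing from the first (left) cell to the second (right) cell
    num_flux : callable(qL, qR, nx, ny) returning the numerical flux through the edge
    """
    R = np.zeros_like(Q)
    for (iL, iR), (nx, ny) in zip(edges, normals):
        F = num_flux(Q[iL], Q[iR], nx, ny)   # one edge-normal flux evaluation per edge
        R[iL] += F                           # flux leaves the left cell
        R[iR] -= F                           # and enters the right cell
    return R

def runge_kutta_step(Q, dt_over_vol, residual):
    """One explicit 5-stage Runge-Kutta time step (Jameson-type coefficients)."""
    alphas = (1.0 / 4.0, 1.0 / 6.0, 3.0 / 8.0, 1.0 / 2.0, 1.0)
    Q0 = Q.copy()
    for alpha in alphas:
        Q = Q0 - alpha * dt_over_vol[:, None] * residual(Q)
    return Q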

The implementation of the 2nd-order versions of the two upwind schemes uses MUSCL reconstruction in order to obtain left and right states at interfaces. An original procedure for performing this reconstruction is presented which defines a 1-D stencil in the edge-normal direction and, therefore, obviates the need to compute flow property gradients at each cell. This 1-D stencil is constructed by identifying an additional triangle along the edge-normal direction which is used for the linear reconstruction process. All search operations necessary for this identification are performed at a pre-processing stage, yielding a very efficient algorithm. Moreover, the 2nd-order versions of the upwind schemes require the implementation of limiters in order to try to minimize oscillations at discontinuities. A few different limiters were actually coded, but only results with the minmod limiter were reported here. Previous experience with the other limiters has indicated that most of them fail to converge to machine zero, whereas the minmod limiter typically reaches machine zero for the cases analyzed here.
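A minimal sketch of this limited linear reconstruction is given below, assuming that the two additional stencil values, taken from the upstream and downstream triangles identified during the pre-processing search, are available; the function and variable names are hypothetical.

def minmod(a, b):
    # Minmod limiter: zero when the slopes disagree in sign, otherwise the smaller one.
    if a * b <= 0.0:
        return 0.0
    return a if abs(a) < abs(b) else b

def muscl_states(q_ll, q_l, q_r, q_rr):
    """Limited left/right states at the interface of the 1-D edge-normal stencil
    (q_ll) --- (q_l) | interface | (q_r) --- (q_rr)."""
    dql = minmod(q_l - q_ll, q_r - q_l)
    dqr = minmod(q_r - q_l, q_rr - q_r)
    qL = q_l + 0.5 * dql
    qR = q_r - 0.5 * dqr
    return qL, qR

With qL and qR obtained in this way, the upwind flux routine receives limited, nominally 2nd-order accurate interface states without any computation of flow property gradients at the cells.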

Results with unstructured fixed meshes, both coarse and fine, were obtained and compared with those calculated with an appropriate adaptively refined mesh. The various calculations indicate that it is possible to obtain converged solutions with centered schemes, even for the very high Mach number flows considered in the present work. However, these solutions will most certainly be oscillatory. Moreover, the solutions with both 1st- and 2nd-order versions of the Liou scheme are also quite oscillatory, especially across the strong upper wall entrance shock. The use of adaptively refined meshes has contributed to reducing the oscillations in all cases. On the other hand, this has not been enough to completely remove the oscillations in the cases in which they appear. The 1st-order van Leer flux-vector splitting scheme has drastically reduced the flow property oscillations. However, as one could expect, this 1st-order method also causes considerable smearing of the flow discontinuities, due to the excessive artificial dissipation it intrinsically adds.

Among the various schemes implemented, the 2nd-order AUSM+ method has provided the sharpest shock definitions. This is true both with fixed and with adaptively refined meshes. However, even with the adaptively refined mesh, the 2nd-order Liou scheme has shown overshoots in the pressure distributions at the upper wall entrance shock, and the situation is considerably worse for the fixed mesh solutions with this scheme. Moreover, one must also observe that both 2nd-order upwind methods have a slower convergence rate than the other schemes implemented. Furthermore, for the higher Mach number cases, the 2nd-order implementation of the Liou scheme was not able to reach machine zero, even with the minmod limiter.
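For reference, the difference between the two flux-vector splitting families discussed above can be made concrete through their split Mach-number polynomials, written below following the standard formulations of Van Leer (1982) and Liou (1996); this is textbook material rather than the authors' implementation.

def van_leer_split_mach(M):
    """Van Leer split Mach numbers (2nd-degree polynomials in the subsonic range)."""
    if abs(M) >= 1.0:
        return 0.5 * (M + abs(M)), 0.5 * (M - abs(M))
    return 0.25 * (M + 1.0) ** 2, -0.25 * (M - 1.0) ** 2

def ausm_plus_split_mach(M, beta=1.0 / 8.0):
    """AUSM+ split Mach numbers (4th-degree polynomials in the subsonic range)."""
    if abs(M) >= 1.0:
        return 0.5 * (M + abs(M)), 0.5 * (M - abs(M))
    Mp = 0.25 * (M + 1.0) ** 2 + beta * (M * M - 1.0) ** 2
    Mm = -0.25 * (M - 1.0) ** 2 - beta * (M * M - 1.0) ** 2
    return Mp, Mm

The additional 4th-degree term in the AUSM+ polynomials, with beta = 1/8, is one of the ingredients that lowers the numerical dissipation of the scheme relative to the van Leer splitting, which is consistent with the sharper shocks, and also with the larger oscillations, reported above.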

The mesh adaptation procedure implemented was able to generate good quality meshes for the cases considered in the present work. The adaptation strategy identified the more relevant high gradient areas and provided an adequate grid point clustering in the important regions. Moreover, some simple mesh smoothing procedures have also been implemented, through point movement and diagonal swapping techniques, which contributed to the high quality of the meshes after refinement. It is also important to emphasize that the tests conducted in the context of the present work have only used a sensor based on flow density gradients. Although this has produced good results for the present cases, one can conceivably argue that there are other important cases in which this approach would not be the most appropriate. Therefore, further testing would clearly be necessary in order to achieve a more robust strategy for the sensor definition.
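A density-gradient sensor of the kind described can be sketched as below; the undivided-difference form and the fractional threshold are illustrative assumptions, not the exact criterion adopted in the paper.

import numpy as np

def flag_cells_for_refinement(rho, neighbors, fraction=0.3):
    """Flag cells for refinement with a simple density-gradient sensor.

    rho       : (ncells,) array of cell-averaged densities
    neighbors : list of neighbor-index lists, one per cell (negative entries ignored)
    fraction  : cells whose sensor exceeds this fraction of the maximum are flagged
    """
    sensor = np.zeros_like(rho)
    for i, nbrs in enumerate(neighbors):
        diffs = [abs(rho[j] - rho[i]) for j in nbrs if j >= 0]
        sensor[i] = max(diffs) if diffs else 0.0
    return sensor >= fraction * sensor.max()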

ACKNOWLEDGMENTS

The authors gratefully acknowledge the partial support for this research provided by Conselho Nacional de Desenvolvimento Científico e Tecnológico, CNPq, under the Integrated Project Research Grant No. 312064/2006-3. This work is also supported by Fundação de Amparo à Pesquisa do Estado de São Paulo, FAPESP, through Process No. 2004/16064-9.

REFERENCES

Anderson, W. K., Thomas, J. L., Van Leer, B., 1986, “A Comparison of Finite Volume Flux Vector Splittings for the Euler Equations”, AIAA Journal, Vol. 24, No. 9, pp. 1453-1460.

Azevedo, J. L. F., 1992, “On the Development of Unstructured Grid Finite Volume Solvers for High Speed Flows”, IAE, São José dos Campos, Brazil, (Report NT-075-ASE-N/92).

Azevedo, J. L. F., Figueira da Silva, L.F., 1997, “The Development of an Unstructured Grid Solver for Reactive Compressible Flow Applications”, 33rd AIAA/ASME/SAE/ASEE Joint Propulsion Conference & Exhibit, Seattle, WA (AIAA Paper 97-3239).

Azevedo, J. L. F., Oliveira, L. C., 1994, “Unsteady Airfoil Flow Simulations Using the Euler Equations”, Proceedings of the 12th AIAA Applied Aerodynamics Conference, Part 2, Colorado Springs, CO, pp. 650-660, (AIAA Paper 94-1892-CP).

Hirsch, C., 1990, “Numerical Computation of Internal and External Flows”, Vol. 2: Computational Methods for Inviscid and Viscous Flows, Wiley, New York, Chap. 20, pp. 408-443.

Jameson, A., Mavriplis, D., 1986, “Finite Volume Solution of the Two-Dimensional Euler Equations on a Regular Triangular Mesh”, AIAA Journal, Vol. 24, No. 4, pp. 611-618.

Jameson, A., Schmidt, W., Turkel, E., 1981, “Numerical Solution of the Euler Equations by Finite Volume Methods Using Runge-Kutta Time-Stepping Schemes”, AIAA 14th Fluid and Plasma Dynamics Conference, Palo Alto, CA (AIAA Paper 81-1259).

Liou, M. S., 1994, “A Continuing Search for a Near-Perfect Numerical Flux Scheme. Part I: AUSM+”, NASA Lewis Research Center, Cleveland, OH (NASA TM-106524).

Liou, M. S., 1996, “A Sequel to AUSM: AUSM+”, Journal of Computational Physics, Vol. 129, pp. 364-382.

Lyra, P. R. M., 1994, “Unstructured Grid Adaptive Algorithms for Fluid Dynamics and Heat Conduction”, Ph.D. Thesis, Department of Civil Engineering, University of Wales Swansea, Swansea, Wales, U.K.

Mavriplis, D.J., 1988, “Multigrid Solution of the Two-Dimensional Euler Equations on Unstructured Triangular Meshes”, AIAA Journal, Vol. 26, No.7, pp. 824-831.

Mavriplis, D.J., 1990, “Accurate Multigrid Solution of the Euler Equations on Unstructured and Adaptive Meshes”, AIAA Journal, Vol. 28, No. 2, pp. 213-221.

Pulliam, T. H., 1986, “Artificial Dissipation Models for the Euler Equations”, AIAA Journal, Vol. 24, No. 12, pp.1931-1940.

Steger, J. L., Warming, R. F., 1981, “Flux Vector Splitting of the Inviscid Gasdynamic Equations with Application to Finite-Difference Methods”, Journal of Computational Physics, Vol. 40, No. 2, pp. 263-293.

Van Leer, B., 1979, “Towards the Ultimate Conservative Difference Scheme. V. A Second-Order Sequel to Godunov’s Method”, Journal of Computational Physics, Vol. 32, No. 1, pp. 101-136.

Van Leer, B., 1982, “Flux-Vector Splitting for the Euler Equations,” Proceedings of the 8th International Conference on Numerical Methods in Fluid Dynamics, E. Krause, editor, Lecture Notes in Physics, Springer-Verlag, Berlin, Vol. 170, pp. 507-512.

Marcio Y. Nagamachi*
Institute of Aeronautics and Space
São José dos Campos – Brazil
[email protected]

Jose Irineu S. Oliveira
Institute of Aeronautics and Space
São José dos Campos – Brazil
[email protected]

Aparecida M. Kawamoto
Institute of Aeronautics and Space
São José dos Campos – Brazil
[email protected]

Rita Cássia L. Dutra
Institute of Aeronautics and Space
São José dos Campos – Brazil
[email protected]

*author for correspondence

Received: 28/09/09    Accepted: 19/10/09

ADN – The new oxidizer around the corner for an environmentally friendly smokeless propellant

Abstract: The search for a smokeless propellant has encouraged scientists and engineers to look for a chlorine-free oxidizer as a substitute for AP (ammonium perchlorate). Endeavors seemed to come to an end when ADN (ammonium dinitramide) appeared in the West in the early 1990s. Although some drawbacks soon became apparent at that time, the foremost obstacle to its use in rocket motors came from the patent originally applied for in the United States in 1990. Furthermore, environmental concerns have also increased during these two decades. Ammonium perchlorate is believed to cause thyroid cancer by contaminating soil and water. In addition, AP produces hydrogen chloride during burning, which can cause acid rain and ozone layer depletion. Unlike AP, ADN offers both a smokeless and a green propellant. Since then, much progress has been made in its development, in synthesis, re-shaping, microencapsulation and solid propellant formulation. The high solubility of ADN in water has also allowed its application as a liquid monopropellant. Tests have revealed Isp (specific impulse) superior to that normally observed with hydrazine, one of the most harmful and hazardous liquid propellants. With constraints on its use, and with the original patent close to expiry, scientists and engineers are rushing to complete their developments and patents before then.

Keywords: ADN, Ammonium dinitramide, Smokeless propellant, Green propellant, Oxidizer.

INTRODUCTION

This text is intended to provide an overview of ADN and a comprehensive description of the achievements in its development to date. A brief history is given, from its origin in the former USSR as a key component in the production of smokeless propellants, up until the aftermath of the end of the Cold War, when it proved to be a strategic component for smokeless and green propellants. A description of recent advances is made while addressing issues that emerged during the attempts to replace AP (ammonium perchlorate) in solid propellants and hydrazine in liquid monopropellants. It is hoped that the information contained here will assist readers in assessing the future of ADN in the rocket-motor industry in the coming years.

Although many routes for ADN (ammonium dinitramide) synthesis have been proposed so far, the only effective one for large-scale production uses sulfamate salts and nitrating acid. The shape of ADN crystals as synthesized is far from round. In this sense, two techniques stand out for re-shaping these crystals, one based on a prilling process and the other on emulsion crystallization. The shape of the crystals plays a major role in the ADN loading (% weight) in solid propellants, which also helps to increase the propellant density. Microencapsulation is another breakthrough: after re-shaping the crystals into rounded grains, they are protected from wetting and from reacting with the cure agent (chemical compatibility). Each particle is coated with a thin layer of polymer, which is also expected to reduce sensitivity and, to some extent, help to improve stability. Two methods are noteworthy, one using a spray dryer and the other a fluidized bed. Despite all the achievements so far, scientists continue to search for an ideal stabilizer. ADN-based propellants have exhibited higher burning rates with both binders, HTPB (hydroxyl-terminated polybutadiene) and GAP (glycidyl azide polymer). ADN/HTPB-based propellants tend to exhibit poorer mechanical properties when compared to ADN/GAP-based ones. These properties are also affected by microencapsulation, which has led to the development of techniques that depend on the material from which the coating is made. In addition, some achievements have been made in developing liquid monopropellants based on ADN. This new propellant is meant to replace hydrazine in liquid thrusters.

HISTORY

Bottaro (1993) originally introduced ADN to the public in the West by applying for a patent in 1990. The patent was granted to SRI (Stanford Research Institute) in 1993 for this subject matter; SRI also claimed to have conceived and first synthesized the ADN molecule.


Meanwhile, Pak (1993), of the LNPO Soyuz, made public a paper at the AIAA Conference in the United States in 1993, in which he disclosed what is now accepted as being the origin of ADN in the former USSR (Union of Soviet Socialist Republics). One year later, Tartakovsky and Lukyanov (1994) re-asserted the Soviet origin of ADN at the Fraunhofer-ICT Conference in Germany in 1994. Nowadays, it is acknowledged that ADN was first synthesized in the early 1970s in the USSR. After going through developments in the 1970s and 1980s, the Soviets produced tons of ADN for tactical rockets and missiles during the 1980s. In the meantime, the Soviet Union's economy deteriorated during that decade, and the Cold War finally came to an end in the late 1980s. In the course of events, specialists in ADN moved abroad and lent assistance to local engineers. By that time, the Soviet plant had been destroyed or dismantled and production was discontinued for good. In Sweden in 1996, FOI (the Swedish Defence Research Agency) designed a new one-step process and, one year later, in 1997, Bofors inaugurated a plant and started producing ADN using the FOI technology. SAAB AB purchased Bofors in 1999 and, ultimately, Eurenco, a joint venture controlled by SNPE, took over AB Bofors in 2004. Thereafter, Eurenco made ADN samples available for sale, which has stimulated engineers across the world to carry on developments. NASA/GRC (National Aeronautics and Space Administration / Glenn Research Center) has a requirement to produce ADN for solid rocket boosters. Even though NASA had started researching ADN in the 1990s, this work was interrupted soon afterwards, due to the shortage of resources funded by AFOSR (Air Force Office of Scientific Research) and, subsequently, on account of the downturn in the American space program over the last two decades. However, in December 2008, NASA announced interest in acquiring items from FOI to produce ADN for the new ARES booster, the vehicle that is meant to replace the Space Shuttle. Research into ADN is also under way in many countries, e.g., Japan, India and China, and, more recently, at AQI-IAE (Chemical Division - Institute of Aeronautics and Space) in Brazil.

BACKGROUND

Structure

Dinitramidic acid has a structure similar to that of ammonia, in which two of the hydrogen atoms of the ammonia molecule are replaced by NO2 groups.

(1)

The two NO2 groups exert a strong inductive effect on the central nitrogen. As a result, the remaining hydrogen detaches as a hydronium cation.

(2)

The dinitramide anion is more stable than the acid form. Bottaro et al. (1997) have attributed the additional stability to the existence of the three resonance structures shown below (3):

(3)

ADN is formed by one dinitramide anion and one ammonium cation. This salt is highly soluble in water at ambient temperature and slightly soluble at temperatures as low as -40º C. The dissociation is represented by:

NH4N(NO2)2 ⇌ NH4+ + N(NO2)2-    (4)

Physical & Chemical Properties

ADN crystals grow as yellow/white needles. The density of the crystals is 1.885 g/cm3, lower than the 1.950 g/cm3 of AP (ammonium perchlorate). Hygroscopicity and friction/impact sensitivities are higher than those of AP. Unlike most oxidizers, ADN melts before decomposing; the melting point is around 93ºC. ADN is chemically incompatible with isocyanate-based cure agents. The oxygen balance of ADN is 25.8 per cent, lower than the 34.0 per cent of AP. This apparent disadvantage is compensated for by a higher energy of formation (-125.3 kJ/mol) when compared to that of AP (-283.1 kJ/mol). In addition, the specific impulse of ADN-based propellants is higher than that of AP-based ones. Table 1 provides some properties.
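As a quick consistency check on the quoted figure, the oxygen balance of ADN, NH4N(NO2)2 (i.e. H4N4O4, M ≈ 124.06 g/mol), can be evaluated with the conventional formula for a CcHhNnOo compound:

\mathrm{OB\,(\%)} \;=\; \frac{1600\,\bigl(n_O - 2\,n_C - \tfrac{1}{2}\,n_H\bigr)}{M} \;=\; \frac{1600\,(4 - 0 - 2)}{124.06} \;\approx\; 25.8\%

With the chlorine assumed to leave as HCl, the same bookkeeping applied to AP (NH4ClO4, M ≈ 117.5 g/mol) gives the +34 per cent quoted above.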

Table 1: ADN Properties

                              ADN
Morphology of crystal         needles
Hygroscopicity                high
Color of crystal              yellow/white
Density (g/cm3)               1.81
Melting point (ºC)            92 – 95
Friction sensitivity (N)      64 – 72
Impact sensitivity (N.m)      3 – 5
Oxygen excess (%)             25.8
Formation energy (kJ/mol)     -125.3

ADN DEVELOPMENTS

Synthesis

Schmitt et al. (1993) originally proposed a route for synthesizing ADN from ammonia. This is a straightforward way to perform the synthesis, since the two compounds have similar structures (1). However, this nitration requires expensive nitronium salts or nitryl compounds, which makes the reaction justifiable only for scientific or academic purposes. These routes are not suitable for large-scale production or commercial ends due to the high cost of the nitrating agents. Some examples of the nitrating agents disclosed in this patent are:

(5a)

(5b)

(5c)

Nitronium cation NO2+ is always an intermediate in nitration reactions. This cation is rare and should not be confused with the nitrite anion NO2-. Nitronium salts are efficient, but very costly (thousands of dollars per gram). On the other hand, Langlet et al. (1997) have reported the use of nitrating acid for this purpose. Nitrating acid is a combination of fuming nitric acid and sulfuric acid, commonly used in the explosives industry. Nitrating acid is less expensive, which makes the production of ADN economically viable. The sulfuric acid catalyzes the nitronium formation, according to the reaction:

H2SO4 + HNO3 ⇌ HSO4- + NO2+ + H2O    (6)

However, nitrating acid is not effective in nitrating directly from ammonia. In this sense, the breakthrough came with Langlet et al. (1997), who reported a method to synthesize ADN from sulfamic acid or sulfamate salts instead of ammonia. This route was first utilized by the Soviets in the former USSR. The structure of the sulfamate anion is:

(7)

The two hydrogen atoms on the nitrogen of the sulfamate anion are more liable to react than the ones in the ammonia molecule. This route has been called "the one-step reaction". The reaction is carried out in a severely acidic medium, which results predominantly in dinitramidic acid.

(8)

The dinitramidic acid in this medium is not as stable as the dinitramide anion, since it is more likely to decompose to AN (ammonium nitrate). For this reason, Vörde and Skifs (2005) proposed a method to convert dinitramidic acid into ADN. The reaction is represented by:

(9)

Recrystallization

ADN crystals grow as long needles, which makes their re-shaping crucial. The shape can affect the end-of-mix viscosity as well as the mechanical properties of propellants, and a rounder shape also helps to increase the loading (% weight) of oxidizer in propellants. ADN melts at about 93ºC, well below its decomposition temperature of about 124ºC, and this feature has been exploited to re-shape its crystals. A high level of purity is an important condition when applying the following re-shaping techniques.

• Prilling tower

In this method, melted ADN is sprinkled at the top of the prilling tower, as shown in Fig. 1. The ADN droplets come into contact with the cooling nitrogen, flowing upward in counter-current, as they fall through the tower. This was the pioneering method used to obtain ADN prills. Highsmith et al. (2000) have described this technique for ADN.

Figure 1: Schematic representation of the prilling tower

• Emulsion crystallization

Teipel et al. (2000) described this technique for re-shaping ADN crystals into rounded grains by emulsifying ADN in a liquid phase. Pure ADN crystals are poured into a reactor with a non-polar liquid. The mixture is continually stirred to achieve a homogeneous dispersion. The suspension is heated until the crystals melt and form an emulsion. The emulsion continues to be stirred until the droplets reach the proper size. The emulsion is then cooled down and the droplets solidify into rounded grains.

• Spray crystallization

This technique has been widely applied to obtain rounded particles of AN (ammonium nitrate). Heintz and Fuhr (2005) originally devised its application to ADN, and Johansson et al. (2006) developed the technique further for ADN with additional modifications. Melted ADN is sprayed through an atomizer into a vessel of liquid nitrogen. This method is still under development for ADN, and its benefits reside in the compactness of the equipment when compared with the prilling tower.

Microencapsulation

Encapsulation is a technique aimed at protecting ADN grains from wetting and reacting with the cure agent. The method consists of coating the surface of each particle with a thin layer of polymer. ADN is hygroscopic and chemically incompatible since it can react with isocyanate-based cure agents normally employed in binders. It is usual for suppliers to coat ADN with a thin layer of wax to give protection for shipment and storage. However, such a material is not suitable for use in propellants, and therefore it needs to be washed out before the surface receives a polymer coating. This technique is called microencapsulation.

• Coacervation

Initially ADN particles are poured and dispersed in an emulsion consisting of ethylcellulose in cyclohexane. The dispersion is stirred until the droplets of ethylcellulose gather around each particle and merge to form a continuous layer on its surface. The process is induced by pH, temperature and the addition of a third component. The dispersion is cooled down and the layer is left to harden. The coated ADN is separated and dried. Heintz and Teipel (2000) have described this method.

• Fluidized bed

In this method, ADN particles are kept floating in a chamber by air flowing upwards, as shown in Fig. 2. The polymer is sprayed from either the bottom (Wurster) or the top of the fluidized bed chamber. The particles are kept this way until the coating hardens. Heintz et al. (2008) have reported some results obtained by employing this method with different polymers normally used as binders in propellants.

Figure 2: Schematic representation of the fluidized bed


• Supercritical fluidized bed

Nauflett and Farncomb (2002) introduced a modification of the fluidized bed method. Air is replaced by carbon dioxide, due to the supercritical properties that this gas can exhibit at moderate temperatures and high pressure. Supercritical fluids exhibit low viscosity, similar to gases, and high densities, comparable to liquids. These properties make supercritical gases more suitable for this process. However, the chamber walls must be reinforced to withstand high-pressure operations.

• Spray dryer

The spray dryer is usually applied to smaller particles (< 50 micron), for which the fluidized bed method is not applicable. In this technique, the resin and the ADN grains are dispersed and sprayed into a vessel by means of an atomizer. A thin layer of coating forms around the surface of each particle, and the coated particles become solid as the solvents vaporize. Fig. 3 shows this technique schematically.

Figure 3: Schematic representation of the spray dryer

Solid Propellant

ADN is compatible with single C-C, double C=C or even C-H bonds, as pointed out by Teipel (2005). Nonetheless, ADN is incompatible with isocyanates, normally employed as cure agents for binders in solid propellants. In this sense, Keicher et al. (2008) reported a new method for curing GAP with an isocyanate-free curing agent, and Pontius et al. (2008) also proposed another approach in this direction. However, a remarkable step forward in overcoming this obstacle was taken by microencapsulating ADN grains. Although microencapsulation is an elegant technique for this purpose, the search for an ideal coating material continues.

Any binder normally employed in solid propellants could be used, provided that the cure agent is chemically compatible with ADN. However, most curing processes take place through urethane links, which demand the use of isocyanate-based cure agents. Asthana and Mukundan (2002) have reported a new generation of energetic binders that hold azide (-N3) or nitrate ester groups in their backbone, viz. GAP (glycidyl azide polymer), PBAMO (poly bis azido methyl oxetane), PNIMMO (poly nitrato methyl methyl oxetane), PGN (poly glycidyl nitrate) and PAMMO (poly azido methyl methyl oxetane). On the one hand, PBAMO exhibits the best performance among them; on the other hand, it is solid and has to be combined with energetic copolymers or plasticizers to turn it into a liquid. GAP was the first of these polymers to be successfully applied in propellants. In contrast to what occurs with AP (ammonium perchlorate), with ADN the GAP binder exhibits better results than HTPB (hydroxyl-terminated polybutadiene). GAP has a higher enthalpy of formation than HTPB and, in addition, it exhibits higher specific impulse, as shown in Fig. 4.


Figure 4: Specific impulse comparison


The higher oxygen excess of GAP shifts the maximum of the specific impulse curve to about 80 per cent of oxidizer, whereas for HTPB the maximum lies close to 90 per cent. The smaller amount of oxidizer makes GAP-based propellants less critical in terms of processing and casting and, in addition, such propellants are likely to have enhanced mechanical properties. On the other hand, the lower oxygen excess of ADN shifts the maximum point to the right when compared to the maximum of AP. Gadiot et al. (1993) have reported theoretical performance calculations for ADN with GAP, PBAMO, PNIMMO and PGN as binders.

Stability

Stabilizers are intended to prevent ADN from decomposing at elevated temperatures or over time. The decomposition mechanism is not completely understood, but it is largely accepted to occur through a chain reaction process. At lower temperatures (<100ºC), the decomposition occurs in a gradual and continuous way, mainly evolving ammonia (NH3) and nitrous oxide (N2O), and it becomes noticeable even at temperatures as low as 60ºC. At higher temperatures (>100ºC), nitrogen (N2), nitrogen dioxide (NO2) and nitrous oxide (N2O) are the common gases evolved, in one or more steps, as discussed by Andreev et al. (2000). The first step necessarily involves a rearrangement of ADN. At moderate temperatures, ADN is converted to the less stable dinitramidic acid, a reaction usually catalyzed by acids. This reaction is believed to occur early on, regardless of the temperature to which ADN is subjected.

(10)

The dinitramidic acid formed in the first step continues to decompose, since it is more labile. In the next step, dinitramidic acid decomposes and evolves nitrous oxide (N2O). Russel et al. (1992) monitored the evolution of N2O during degradation tests with ADN. Nitrous oxide (N2O) and ammonia (NH3) are typical gases evolved during ADN decomposition; however, the proportion between the two gases can vary remarkably depending on the temperature, as pointed out by Yang et al. (2005). Nitric acid is also formed in this step.

(11)

At low temperatures (<100ºC), nitric acid and ammonia combine to produce ammonium nitrate AN.

NH3 (ammonia) + HNO3 (nitric acid) → NH4NO3 (ammonium nitrate)    (12)

AN (ammonium nitrate) and HNO3 (nitric acid) are the common impurities capable of speeding up the rate of ADN decomposition. Santhosh and Ghee (2008) have proposed two additional steps for the decomposition at higher temperatures

(13a)

(13b)

The lack of a consistent theory has hampered the development of stabilizers. Attempts have been made based on previous experience with explosives or by trial and error. Organic bases are believed to play an important role in stabilizing ADN. Lobbecke et al. (1997) tested five compounds, as shown in Table 2.

Table 2: Comparison of stabilizers

Stabilizer                   Time (hours) to 10% weight loss
none                         58
MgO                          55
NaBO2                        97
hexamethylenetetramine       149
nitrodiphenylamine           157
Arkadit II                   248

Inorganic and organic bases have been tested; metal oxides are less effective when compared to organic amines. Arkadit II, usually employed as a stabilizer in explosives, turned out to be the most effective in this evaluation. Evidence points to organic bases as being overall the most effective among them. According to Teipel (2005), stabilizers should be incorporated directly into the raw material, or during the recrystallization process.

Liquid Monopropellant

Over the years, scientists have also been looking for a suitable substitute for hydrazine, one of the most hazardous and harmful liquid propellants. In this regard, HAN (hydroxylammonium nitrate) has been on trial since as early as the 1970s. Large amounts of HAN can be dissolved in water, which is the primary requirement for use as a liquid monopropellant; the solubility achieved in this case is as high as 96 per cent. This monopropellant is more stable and storable than its hydrazine-based counterparts, as reported by Schmidt and Gavin (1996). Nonetheless, Morgan and Meinhardt (1999) pointed out recurring lower specific impulse for HAN-based monopropellants when compared to hydrazine, even with the addition of glycerol as a third component acting as fuel.

Van den Berg et al. (1999) revealed the specific impulse of monopropellants consisting of ADN dissolved in water and in alcohols. The specific impulses in both cases are higher than the ones achieved by either HAN-based monopropellants or hydrazine. Anflo et al. (2002) added fuel as a third component to the ADN/water solution, achieving monopropellants with higher specific impulse. Wingborg et al. (2005) have reported the specific and density impulses of ADN/water/fuel blends as being, respectively, 10 per cent and 60 per cent higher than those of hydrazine. Sjöberg and Skifs (2009) disclosed LMP-103S, the four-component ADN-based monopropellant developed by Eurenco-AB Bofors. The formulation is shown in Table 3.

Table 3: Formulation of ADN-based monopropellant

Monopropellant LMP-103S
Component        weight %
ADN              60-65 %
methanol         15-20 %
ammonia          3-6 %
water

LMP-103S delivers 6 per cent more specific impulse and 30 per cent more density impulse than hydrazine. The recent developments in ADN-based liquid monopropellants render ADN also eligible as a potential substitute for hydrazine.
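These two figures can be checked against each other. Since the density impulse is the product of the specific impulse and the propellant density, and assuming a hydrazine density of about 1.00 g/cm3 (a value not stated in the text),

\frac{\rho_{\mathrm{LMP\mbox{-}103S}}}{\rho_{\mathrm{hydrazine}}} \;=\; \frac{1.30}{1.06} \;\approx\; 1.23 \quad\Rightarrow\quad \rho_{\mathrm{LMP\mbox{-}103S}} \approx 1.23\ \mathrm{g/cm^3},

which is a plausible density for a concentrated aqueous ADN blend.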

CONCLUSION

Half a century ago, ammonium perchlorate (AP) began to emerge as the potential component for strategic smokeless propellants in the middle of the Cold War. At that time, it sparked great interest due to its high performance, even though the resulting propellant never met the requirements to be regarded as smokeless. In the 1980s, scientists started looking for a new generation of green propellants in response to rising environmental concerns. In this sense, HAN (hydroxylammonium nitrate) and HNF (hydrazinium nitroformate) were on trial in terms of their potential to replace hydrazine and ammonium perchlorate, respectively. HAN was ruled out for not achieving the performance of hydrazine. On the other hand, HNF had already been under development in the 1980s by ESA (European Space Agency) and Thiokol by the time ADN appeared. Even though HNF achieves higher performance than ADN, with similar drawbacks, its application never came about due to its high cost.

The outcome of the developments over the last two decades indicates that ADN will have an important role in the solid rocket industry, a role that HNF never achieved. The one-step reaction for ADN synthesis has made its large-scale production viable, as well as its application in rocket motors. The new oxidizer is meant to comply with upcoming tighter environmental legislation, in addition to being smokeless from a strategic point of view. The low melting point of ADN has led to the development of efficient processes to make round particles, which result in propellants with high density. On the other hand, microencapsulation has made ADN less hygroscopic, more compatible, less sensitive and, to some extent, more stable. Nevertheless, development is far from over insofar as the search continues for an ideal stabilizer. In addition, ADN trials have shown its potential to replace hydrazine as a more storable, less hazardous and less harmful monopropellant. This text could go further in presenting more benefits of the application of ADN and its achievements. However, the intention of this report is to provide a snapshot of one of the most promising components for solid and liquid propellants in this century and of its development to date.

REFERENCES

Andreev, A. B., Anikin, O. V., Ivanov, A. P., Krylov, V. K., Pak, Z. P., 2000, “Stabilization of Ammonium Dinitramide in the Liquid Phase”, Russian Chemical Bulletin, International Edition, Vol.49, Nº.12, December.

Anflo, K., Grönland, T. A., Bergman, G., Johansson, M., Nedar, R., 2002, “Towards Green Propulsion for Spacecraft with ADN-Based Monopropellants”, 38th AIAA/ASME/SAE/ASEE Joint Propulsion Conference & Exhibit, Indiana, USA.

Asthana, S. N., Mukundan, T., 2002, “Energetic Binder Systems for Advanced Propellants”, Advances in Solid Propellant Technology, McGraw Hill, India, pp. 61-86.

Bottaro, J. C., Schmitt, R. J., Penwell, P. E., Ross, D. S., 1993, “U.S. Patent Nº 5198104”, March 20.

Bottaro, J. C., Penwell, P. E., Schmitt, R. J., 1997, “1,1,3,3-Tetraoxo-1,2,3-triazapropene Anion, a New Oxy Anion of Nitrogen: The Dinitramide Anion and Its Salts”, J. Am. Chem. Soc., Vol. 119, pp. 9405-9410.

Gadiot, G. M. H. J. L., Mul, J. M., Meulenbrugge, J. J., Korting, P. A. O. G, Schnorkh, A. J., Schöyer, H. F. R., 1993, “New Solid Propellants based on Energetic Binders and HNF”, Acta Astronautica, Vol. 29, Nº.10-11, pp. 771-779.


Heintz, T., Fuhr, I., 2005, “Generation of Spherical Oxidizer Particles by Spray and Emulsion Crystallization”, VDI-Berichte, Vol. 1901, pp. 471-476.

Heintz, T., Pontius, H., Aniol, J., Birke, C., Leisinger, K., Reinhard, W., 2008, “ADN – Prilling, Coating and Characterization”, Proceedings of 39th International Annual Conference of ICT, Karlsruhe, Germany.

Heintz, T., Teipel, U., 2000, “Coating of Particulate Energetic Materials”, Proceedings of 31th International Annual Conference of ICT, Karlsruhe, Germany.

Highsmith, T. K., Mcleod, C. S., Wardle, R. B., Hendrickson, R., 2000, “Thermally Stabilized Prilled Ammonium Dinitramide Particles and Process for Making the Same”, US Patent 6136115 and WO Patent 99/01408.

Johansson, M., De Flon, J., Petterson, A., Wanhatalo, M., Wingborg, N., 2006, “Spray Prilling of ADN and Testing of ADN-Based Solid Propellants” Proceedings of 3rd Int. Conf. on Green Propellants for Space Propulsion, Poitiers, France.

Keicher, T., Kuglstatter, W., Eisele, S., Wetzel, T., Krause, H., 2008, “Isocyanate-Free Curing of Glycidyl-Azide-Polymer (GAP) With Bis-Propargyl-Succinate”, Proceedings of 39th International Annual Conference of ICT, Karlsruhe, Germany.

Langlet, A., Ostmark, H., Wingborg, N., 1997, “Method of Preparing Dinitramidic Acid and Salts Thereof”, US Patent 5976483, FOI, Sweden.

Lobbecke, S., Krause, H., Pfeil, A., 1997, “Thermal Decomposition and Stabilization of Ammonium Dinitramide (ADN)”, Proceedings of 28th International Annual Conference of ICT, Karlsruhe, Germany.

Morgan, O. M., Meinhardt, D. S., 1999, “Monopropellant Selection Criteria – Hydrazine and Other Options”, Proceedings of 35th AIAA/ASME/SAE/ASEE Joint Propulsion Conference and Exhibit, California, USA.

Nauflett, G. W., Farncomb, R. E., 2002, “Coating of PEP Ingredients Using Supercritical Carbon Dioxide”, Proceedings of 30th JANNAF Propulsion Dev. And Charac. Subcomm. Meeting, 13-19.

Pak, Z., 1993, “Some Ways to Higher Environmental Safety of Solid Rocket Propellant Application”, Proceedings of the 29th AIAA/SAE/ASME/ASEE Joint Propulsion Conference and Exhibit, Monterey, CA, USA.

Pontius, H., Bohn, M. A., Aniol, J., 2008, “Stability and Compatibility of a New Curing Agent for Binders Applicable with ADN Evaluated by Heat Generation Rate Measurements”, Proceedings of 39th International Annual Conference of ICT, Karlsruhe, Germany.

Russel, T. P., Stern, A. G., Koppes, W. M., Benford, C. D., 1992, “Thermal Decomposition and Stabilization of Ammonium Dinitramide” Proceedings of 29th JANNAF Combustion Subcommittee Meeting, CPIA Publication, USA, 339-345.

Santhosh, G., Ghee, A. H., 2008, “Synthesis and Kinetic Analysis of Isothermal and Non-Iso-Thermal Decomposition of Ammonium Dinitramide Prills” Journal of Thermal Analysis and Calorimetry, Vol. 94, Nº. 1, pp.263-270.

Schmidt, E. W., Gavin, D. F., 1996, “Catalytic Decomposition of Hydroxylammonium Nitrate-based Monopropellants”, Olin Corporation, US Patent 5485722, USA.

Schmitt, R. J., Bottaro, J. C., Penwell, P. E. et al., 1993, “Process for Forming Ammonium Dinitramide Salt by Reaction Between Ammonia and a Nitronium Containing Compound”, US Patent 5316749, SRI International, USA.

Sjöberg, P., Skifs, H., 2009, “A Stable Liquid Mono-Propellant Based on ADN”, Proceedings of 40th International Annual Conference of ICT, Karlsruhe, Germany.

Tartakovsky, V.A., Lukyanov, O.A., 1994, “Synthesis of Dinitramide Salts”, Proceedings 25th International Annual Conference of ICT, Karlsruhe, Germany.

Teipel, U., 2005, “Energetic Materials – Particle Processing and Characterization”, Wiley-VCH, Germany, p. 19.

Teipel, U., Heintz, T., Krause, H., 2000, “Crystallization of Spherical Ammonium Dinitramide (ADN) Particles”, Propellants, Explosives, Pyrotechnics, Vol. 25, pp. 81-85.

Van Den Berg, R. P., Mul, J. M., Elands, P. J. M., 1999, “Monopropellants System”, European Patent 0950648(A1).

Vörde, C., Skifs, H., 2005, “Method of Producing Salts of Dinitramidic Acid”, WO/2005/070823, Sweden.

Wingborg, N., Larsson, A., Elfsberg, M., Appelgren, P., 2005, “Characterization and Ignition of ADN-based Liquid Monopropellants”, Proceedings of 41st AIAA/ASME/SAE/ASEE Joint Prop. Conf. and Exhib. AIAA 2005-4468, Germany.

Yang, R., Thakre, P., Yang, V., 2005, “Thermal Decomposition and Combustion of Ammonium Dinitramide (Review)”, Combustion, Explosion and Shock Waves, Vol. 41, Nº. 6, pp. 657-679.


Adam S. Cumming
Defence Science and Technology Laboratory
Fort Halstead - United Kingdom
[email protected]

New trends in advanced high energy materials

Abstract: In the last twenty years, military explosives and energetic materials in general have changed significantly. This has been due to several factors, which include new operational requirements such as Insensitive Munitions (IM), but also the availability of new materials and of new assessment and modelling techniques. These permit more effective use of materials and a more detailed understanding of the processes involved in applying the technology. This article outlines some of these effects, in addition to taking a glance at what the future might hold.

Keywords: Energetic materials, Insensitive munitions, Explosives

LIST OF SYMBOLS

ADN Ammonium dinitramide
AP Ammonium perchlorate
CL20 2,4,6,8,10,12-hexanitro-2,4,6,8,10,12-hexaazaisowurtzitane
EURENCO European Energetics Corporation
FOX 7 1,1-diamino-2,2-dinitroethylene
FOI Swedish Defence Research Agency
F of I Figure of Insensitiveness
GAP Glycidyl Azide Polymer
GUDN N-guanylurea-dinitramide
HNF Hydrazinium nitroformate
HMX Octahydro-1,3,5,7-tetranitro-1,3,5,7-tetrazocane
HTPB Hydroxy-terminated polybutadiene
HTPE Hydroxy-terminated polyether
HTCE Hydroxy-terminated caprolactone ether
IM Insensitive Munitions
I-RDX Insensitive RDX
MSIAC Munitions Safety Information Analysis Centre
NATO North Atlantic Treaty Organization
NIMIC NATO Insensitive Munitions Information Centre
Poly-NIMMO 3-nitratomethyl-3-methyloxetane polymer
Poly-GLYN Nitratomethyl oxirane polymer
PBX Polymer Bonded Explosives
RS-HMX Reduced Sensitivity HMX
RS-RDX Reduced Sensitivity RDX
STANAG NATO Standardization Agreement
TATB 1,3,5-triamino-2,4,6-trinitrobenzene
TNT Trinitrotoluene
UK United Kingdom
UK MOD United Kingdom Ministry of Defence
UN United Nations
USA United States of America
US PAX Picatinny Arsenal Explosive

INTRODUCTION

There are always two factors that drive developments. They are interconnected but can exist separately. The first is new technology or technological developments and the second is new requirements. The convergence of the two produces what is called a killer application which drives developments and markets.

It may seem strange to apply such an analogy to Energetic Materials but in a limited way this has indeed taken place. Without new technology it would not be possible to meet new requirements or even define new options, but without an awareness of new needs the technology would languish unused.

Within Energetics, the need to reduce the vulnerability of munitions, now coupled with the need to manage their life effectively, has meant that technology such as Polymer Bonded Explosives (PBX) has been developed and applied. This was only possible with the technology available, though it inevitably produced more questions than it answered. Any IM policy requires that the risk to users be quantified, which means that it is necessary to have sufficient understanding of the processes involved to be able to predict the response well enough to meet the immediate requirement for service. Naturally these requirements also alter with time and experience, so that this too is a continuing activity and leads to far greater investment in basic science and modelling than might have been predicted a few years ago.

To these requirements must now be added the need to do more with less; to be precise in delivery and action and so maximise effect with minimal collateral damage. Such a set of requirements cannot be easily met with the technology that existed even ten years ago. It requires basic understanding of materials, both old and new, and understanding of the processes of performance, vulnerability and ageing, so that these can be used in predictive modelling. The aim is to understand both the

Received: 10/07/09 Accepted: 30/09/09


materials and how to use them properly to design weapons. Such an understanding can also lead to cost reductions and the ability to use existing materials more effectively.

It is worth examining the different aspects in turn, starting with materials.

INGREDIENTS AND MATERIALS

Scientific curiosity will always drive research, even though this may not always be understood by those who wish to solve their immediate problems. New methods of producing materials and new materials arise from this research, and the ability to undertake these studies is important within the technology. However, there are constraints.

Any new material must fulfil a real need or provide options not previously available. Until about 1990 performance drove that requirement, with an increasing awareness of Insensitive Munitions requirements acting as a constraint.

The changes in requirements, which include the continuing uncertainty over what the requirements should be, have slowed this development, and several materials that once seemed certain to be used are still awaiting application. Many have been produced on laboratory scale and several on pilot scale, but only a few have made it beyond that, into demonstrator programmes even if not into any fielded munitions.

Ingredients can be divided into two classes, solids and binders. Looking first at binder technology, while HTPB (hydroxy-terminated polybutadiene) has successfully made the transition from composite rocket propellants to Polymer Bonded Explosives (PBX), and other similar materials have been employed in the same role (e.g. HTPE, hydroxy-terminated polyether, and HTCE, hydroxy-terminated caprolactone ether), the expected transition to energetic binders has stalled. At present, military needs can be met with the materials that are already in service, and neither Glycidyl Azide Polymer (GAP) nor 3-nitratomethyl-3-methyloxetane polymer (Poly-NIMMO) nor nitratomethyl oxirane polymer (Poly-GLYN) has really moved beyond technical demonstrator status.

It is worth outlining the logic for their development. It became clear that TNT was too brittle to meet developing vulnerability requirements, and the use of inert polymer binders was proposed as a way to remedy this defect, drawing on the extensive rocket propellant expertise. (It is worth noting that melt-cast options are again being examined, as they continue to be easy to use and meet several needs very well.) While the mechanical properties were improved, the level of solid required for maintaining the performance level was high, and the argument ran that if energy were to be embedded in the polymeric chain then performance could be maintained while further improving mechanical properties at the same time.

Several materials were produced and studied, with many being able to produce explosives meeting UN transport class 1.6 (Extremely Insensitive Detonating Substance). However, the cost of the materials was high, and it was found possible to meet most current requirements with existing materials. Therefore at present these are still awaiting a system requirement. They are likely to have uses especially within high performance small warheads or specific types of rocket motor, but the requirement has not yet appeared or has not yet been sufficiently defined.

There is a similar story with solid fillers. CL20 (2,4,6,8,10,12-hexanitro-2,4,6,8,10,12-hexaazaisowurtzitane) was produced in 1986 at China Lake and, once the highest-density form known to date, the epsilon polymorph, was found, seemed likely to be used to replace HMX (octahydro-1,3,5,7-tetranitro-1,3,5,7-tetrazocane) where high performance was needed. It has around 15 per cent higher performance than HMX, and was formulated into a metal-moving composition, LX19, for high performance shaped charges. This has 95.5 per cent by weight of CL20 and 4.5 per cent of Estane binder. Unfortunately, CL20 is a highly sensitive explosive. It has a Figure of Insensitiveness (F of I) of approximately 25 and high explosiveness, making it hard to use for low vulnerability compositions. It can, however, be desensitised, and this has been achieved for the US PAX (Picatinny Arsenal Explosive) series of compositions as well as in rocket propellants (Balas, 2003).

CL20 is only one of the newly available solid ingredients. More recently, FOX 7 (1,1-diamino-2,2-dinitroethylene) has been produced in Sweden by FOI (Swedish Defence Research Agency) and its production licensed to what is now Eurenco. FOX 7 (Karlsson, 2002) has roughly the same performance as RDX but appears to be much less sensitive. To date the data confirm this, and Oestmark et al. (2006) have suggested that this is due to the graphite-like structure within the crystals, similar to that of TATB (1,3,5-triamino-2,4,6-trinitrobenzene), which allows a mechanism for internal flexibility. This may explain the vulnerability response, and a similar structure has also been noted in N-guanylurea-dinitramide, GUDN (Oestmark et al., 2006), sometimes known as FOX 12, which appears to offer similar vulnerability benefits, particularly in gun propellant formulations.

Other ingredients are older, particularly ammonium dinitramide (ADN), first synthesised in Russia in 1971 and employed as an oxidiser in rocket propellants. It has problems in use, being very hygroscopic and having a low melting point (91-93.5°C), and it also has a low F of I of around 25, though it can be desensitised. However, it is one of the few possible replacements for ammonium perchlorate (AP), now seen as posing significant environmental problems in the USA. Whether it can be used or not is as yet unproven, but it was in Soviet service despite the problems. Its possible applications are not limited to propellants; it also has great potential for underwater explosives.

The European Space Agency and others have invested heavily in HNF (hydrazinium nitroformate) as an alternative oxidiser. However, this has even more stability problems than ADN and has not yet been proven to be suitable for use. Work is still progressing, but without definitive proof of its suitability it is unlikely to be investigated further.

The properties of FOX 7 illustrate the current approach. It appears to offer satisfactory performance while allowing the production of new low vulnerability materials which can contribute to IM solutions. It is benefits such as these which will drive new materials into use.

Yet as work is performed in understanding these materials, the same techniques can be used to modify and improve existing ingredients. Being able to control the morphology and size of crystals is now possible in ways not available until recently, and some surprising results have been obtained. Work over a decade ago shared amongst the UK, Netherlands and Norway showed that producing spherodised materials affected the shock sensitivity as well as affecting the processing characteristics. Since then others have examined ways of manufacturing materials and the benefits obtained by greater control of particle size and shape.

I-RDX (Insensitive RDX), or more generally RS-RDX (Reduced Sensitivity RDX), has emerged in the last few years. The first, I-RDX, was produced by what is now Eurenco (the name is a Eurenco trademark). The evidence suggested that this version of RDX was significantly less sensitive than traditional grades, and therefore could be used to reduce the vulnerability of munitions through less sensitive fillings, for example in PBXN-109. Other manufacturers offered similar products within a very short time, to the extent that NATO AC326 arranged a round-robin (Doherty, 2006), managed by MSIAC (Munitions Safety Information Analysis Centre), with various versions of RS-RDX and attempts to determine what made it different and how to encapsulate that within a STANAG. The study indicated that there was no simple answer to the question but that the properties of the crystal – surface, density, voids and flaws all played some part. Research continues in several laboratories throughout the world, with the result that a greater understanding of the importance of such properties is being obtained. In the meantime RS-HMX (Reduced Sensitivity HMX) has been produced and is being offered by Chemring in Norway.

As it is unclear if these forms of existing materials do offer real vulnerability benefits both initially and during munition life, this is currently being examined and research is needed both on the materials and on the tools used to examine and assess them.

These materials are being applied within munition design as it has been developed in the West in the last 50 years. However, with the end of the Cold War, access was gained to the products of parallel research in Russia. The research was neither better nor worse, just different. Different materials had been developed and used, such as ADN, but more importantly different defeat mechanisms had been assumed, and blast for example was a far more important mechanism in Soviet weaponry than in NATO. It became clear that NATO did not know quite as much as it thought it did, and multiple programmes were started in order to examine mechanisms not previously considered.

This has led to the investigation of nano-materials, including nano-Aluminium produced by various processes; to studies of solid state reactions as a mechanism for rapid energy release; to the examination of blast mechanisms and non-ideal explosives for land weapons, and to looking again at many of the models and assumptions that formed the basis of munition design.

In addition, entirely new routes are being investigated. Modelling has predicted new very high performance materials such as polynitrogen, and these are being sought experimentally. High-nitrogen species of a more conventional type are being researched in Munich under the direction of Prof. T. Klapötke.

While much of the work is being done in the US, FOI in Sweden has been particularly innovative. N5+ was made at Edwards AFB by Karl Christe, while N5- was detected by FOI. This is high-risk, blue-skies research, with no guarantee that any of these species are really stable or that they will give the performance predicted. Already the figures quoted in some publications are being revised as greater understanding is obtained of the way such systems will behave. However, the predicted performance benefits make the gamble worth taking. One conclusion must be that there are other species worth researching; see Fig. 1 (courtesy QinetiQ).

CHARACTERISATION AND ASSESSMENT

If we make new demands on our materials and require that they act with precision throughout their useful life,

Page 37: Vol.1 N.2 - Journal of Aerospace Technology and Management

Journal of Aerospace Technology and ManagementV. 1, n. 2, Jul. - Dec. 2009164

Cumming, A. S.

we need to understand them clearly and also be able to predict their behaviour adequately to allow for both risk management and optimum use. The same tools that can be used for these predictions should also be capable of being applied in the design of weapons systems and of energetic sub-systems to meet new demands.

Traditionally, much of the science has been based on tests that simulate accidents or based on the analysis of the factors that contributed to accidents. It has meant that every nation had a large database of national results which did not easily cross-reference with those in other countries. Often this was based on priorities derived from accidents and the attempt to prevent them being repeated.

Closer links amongst allies meant that it became essential to be able to understand and accept tests from different centres. One of the most important tasks within the NATO CNAD Ammunition Safety Group remit is the development of common tests, released as NATO STANAGs, and the assessment of capability gaps which need filling. This was paralleled in the research area by closer collaboration on developing the tools and understanding needed to provide the means of better risk management. Major accidents such as that on USS Forrestal in the Vietnam War, Sir Galahad in the Falklands and Camp Doha in the First Gulf War emphasised the need for munitions that did what they were supposed to but were otherwise relatively inert, Insensitive Munitions! The development of the tools and materials to provide these has driven much of the research programme for the last 20 years (MSIAC, 2006).

As part of the coordination exercise, NATO created the NATO Insensitive Munitions Information Centre (NIMIC), where a small group of experts could support work in the national programmes through advice and analysis of available information. This was successful despite limitations on the information at their disposal and it took over preliminary protocols for problem analysis developed by the US, Canada, the UK and Australia using them to support the analytical development of an approach to IM. The UK has been and remains an active member of NIMIC, and now MSIAC (Munitions Safety Information Analysis Center), the successor with a broader remit. It remains important to UK forces that our allies have munitions as close as possible to our standard of vulnerability, since we will often work closely with them in joint operations (MSIAC, 2006).

Nevertheless this approach is only part of the story, and the tool development continues and seems to be accelerating. Detailed analytical examination of mechanisms such as blast has produced modelling tools that are more capable, though this again is driven by both need and technology and therefore

Figure 1: Detonic characteristics of nitrogen species (courtesy QinetiQ). Tetraazacubane, N4: detonation velocity 13.42 km/s, Pcj 770 kbar. Hexaazaprismane, N6: detonation velocity 14.04 km/s, Pcj 933 kbar. Octaazacubane, N8: detonation velocity 14.75 km/s, Pcj 1370 kbar. Hexanitrohexaazaadamantane: detonation velocity 10.1 km/s, Pcj 519 kbar. Tetranitrotetraazacubane: detonation velocity 11.25 km/s, Pcj 720 kbar.

Page 38: Vol.1 N.2 - Journal of Aerospace Technology and Management

Journal of Aerospace Technology and Management V. 1, n. 2, Jul. - Dec. 2009 165

New trends in advanced high energy materials

has also partially become possible through computer development, since computers can now do more and so enable the models to get a little closer to acceptable reality. The diagnostic approach driven by models has meant that several questions have started to be answered, such as how to obtain detailed physical and chemical properties, what happens to a material under shock, and which properties are critical for predicting performance and vulnerability. As mentioned above, with these tools it becomes possible to begin to design materials for specific functions (Cumming, 2009).

Once there is an understanding of the basic components, their interactions and the way they behave with time, it is possible to develop validated tools for general use. Validation is important because, while models can be seductive, they can only be approximations of reality and they rely on the quality of the real measurements. It is equally important that the two groups, modellers and experimentalists, interact to provide a continuous check on the direction and usefulness of both models and experiments. In several countries this is being attempted, and groups have been working on the problem of assessing and predicting hazard so that it is possible to deal more effectively with existing problems as well as prepare to answer the questions likely to be posed by tomorrow's problems.

This approach can be cost effective and is now being pursued widely. With these types of tools it becomes possible to look again at ageing phenomena and provide support for Whole Life Assessment Policy development as well as the aforementioned design capability.

It is still necessary to undertake field tests and this is likely to remain the case for the near future. Any prediction requires validation and that can only be achieved with real tests. However the combination of small scale tests with the improved understanding of behaviour should make such tests increasingly confirmatory. It is clear, however, that this stage has not yet arrived. There are still important questions to be answered, in particular those associated with scaling factors in moving from small scale to large; rapid acceleration or deceleration, and the effect of ageing on munition response. That which passes IM tests may not pass after 5 years in storage. Properties change and the question cannot be answered sufficiently authoritatively to meet the requirements of a responsible owner, which is what the UK MOD aspires to be.

The increased interest in non-ideal explosives has identified areas where greater understanding is needed, and where civil experience can be used to fill the gaps. There is a need for a broader approach, to mutual benefit, and this seems to be developing.

It should be obvious from much of this discussion that collaboration in many forms plays a significant part in the research and assessment. Industry is trans-national, with munitions being equally trans-national, with the result that nations may be faced with similar assessment or procurement problems. It makes sense therefore to work together to develop a common technology approach and understanding especially since no one nation, not even the US, can afford to do all the necessary work alone. Networks already exist and are certain to develop.

CONCLUSION

Inevitably any review provides only a glimpse of the present position. It is possible to predict some of what will develop, but there will always be surprises. What is predictable includes continuing emphasis on reduced vulnerability; increased emphasis on life management and the minimisation of environmental impact, including recycling and the search for more benign materials. These are likely to drive the need for selected new or improved materials which assist in meeting requirements, and to drive studies on green munitions and the environment as outlined in the UK Defence Technology Strategy, an open document available on the Internet.

The need to provide flexible and precise performance in a cost-effective manner will need investment, but longer term options remain open. There are areas such as polynitrogen and non-traditional chemistry which should be investigated. Other scientific fields, such as materials science, can provide inspiration for new directions through links and the move away from traditional approaches.

Collaboration is also likely to increase, which may have the additional benefits of a broader base of experience and therefore, perhaps, a higher level of innovation.

The field is changing fast and in unpredictable ways, which makes it exciting while demonstrating that it is far from exhausted.

REFERENCES

Balas et al., 2003, “CL20 PAX Explosives Formulation Development, Characterisation and Testing”, Insensitive Munitions & Energetic Materials Symposium, San Francisco.

Cumming, A.S., 2009, “Results from Research Collaboration - A Review over 20 Years”, Propellants, Explosives, Pyrotechnics, Vol. 34, pp.187-193.


Doherty, R.M., Nock, L.A., Watt, D.S., 2006, “Reduced Sensitivity RDX, Round Robin Programme-Update”, Proceedings of 37th International Annual Conference of ICT, Karlsruhe, Germany.

Halls, B., et al., 2006, “MSIAC: Core Products and Services”, Proceedings of 37th International Annual Conference of ICT, Karlsruhe, Germany.

Karlsson et al., 2002, “Detonation and Sensitivity Properties of FOX-7 and Formulations Containing FOX-7”, Proceedings of 12th Detonation Symposium, San Diego.

Ostmark, H., 2006, “N-Guanylurea-Dinitramide (FOX-12): A New Extremely Insensitive Energetic Material for Explosives Applications”, Proceedings of 13th Detonation Symposium, Norfolk-Virginia.

UK Crown Copyright Reserved 2009


Elizabeth C. Mattos*
Institute of Aeronautics and Space
São José dos Campos - Brazil
[email protected]

Milton Faria Diniz
Institute of Aeronautics and Space
São José dos Campos - Brazil
[email protected]

Nanci M. Nakamura
Institute of Aeronautics and Space
São José dos Campos - Brazil
[email protected]

Rita de Cássia L. Dutra
Institute of Aeronautics and Space
São José dos Campos - Brazil
[email protected]

* author for correspondence

Determination of polymer content in energetic materials by FT-IR

Abstract: A new methodology was developed to characterize and quantify the polymer content in PBX (HMX/Viton) by Fourier transform infrared spectroscopy (FT-IR), using thermogravimetric analysis (TG) as the reference technique for the quantitative method. The proposed quantification methodology, based on Fourier transform infrared attenuated total reflectance (FT-IR/ATR), showed excellent results, being faster than the usual methodologies and eliminating the generation of chemical residues.

Keywords: Explosives, HMX, FT-IR, TG, ATR, Viton quantification.

INTRODUCTION

The development of space programs, deep oil well drilling and similar applications has created a need for “heat resistant” or “thermally stable” explosives. The main objective of using such explosives or explosive formulations is to support systems or applications that must be reliable and safe at high temperatures (Agrawal, 2005).

An appropriate military explosive needs to be safe and easy to handle. It must be storable over long periods of time in different climates and difficult to detonate, except under specific circumstances (Mathieu, 2004). The development of plastic bonded explosives (PBX) is a way to get more energy in less volume (Hayden, 2005).

PBX is an acronym for “Plastic Bonded Explosive”, a term applied to a variety of explosive mixtures characterized by high mechanical strength, good explosive properties, excellent chemical stability, low sensitivity to shock and handling, and low sensitivity to thermal initiation (Federoff, 1966). Many PBXs have been developed since the middle of the last century, in general resulting in engineering materials for specific or large scale applications (Hayden, 2005; Kim, 1999). These explosive mixtures have a large content of secondary explosives such as cyclotrimethylene trinitramine (RDX), cyclotetramethylene tetranitramine (HMX), hexanitrostilbene (HNS) or pentaerythritol tetranitrate (PETN) in a composition with a polymeric matrix such as polyester, polyurethane, nylon, polystyrene, some rubbers, nitrocellulose and fluoropolymers. The

compatibility and miscibility with other explosives, polymers, and additives must be evaluated before pressing, molding, extruding, etc., for safe handling and storage of the products (Mathieu, 2005; Hayden, 2005).

As the weapons can be placed in aggressive thermal and mechanical environments, it is important to characterize the properties of the PBX in order to know its physico-chemical behavior (Thompson, 2005; Kasprzyk, 1999).

One method to obtain an explosive charge is to press the coated explosive in a hydraulic press, which is the most important process for manufacturing high-performance explosive charges (Wanninger, 1996). As the explosive is obtained in crystal form, it is necessary to coat the crystals with a polymer, which is performed by a covering process.

Different methods employing several techniques have been developed to determine HMX contents (Mattos-a, 2004). High performance liquid chromatography (HPLC) is widely used to investigate explosives (Mattos, 2004). Although the HPLC method presents good results, several steps related to sample preparation or measurement are quite time consuming. It is possible to apply thermogravimetric analysis to the quantitative determination of different energetic compounds in explosive compositions, as reported by Silva et al. (2008). FT-IR techniques can be applied to the characterization of the polymeric composition in PBX (Mattos, 2008). In previous publications (Mattos, 2008; Mattos, 2009), a sample of HMX covered with Viton in different concentrations was analyzed by the ATR technique to obtain good quality spectra. In that specific case, the sample was kept in perfect physical contact

Received: 18/09/09 Accepted: 10/10/09


with the crystal surface (Mattos-b, 2004). Several steps are eliminated and although the data of ATR methods represent the average of five analyses, the time spent in the analysis is less than the time spent in the HPLC method, as demonstrated in another publication (Mattos, 2009).

The covering process and the characterization of energetic materials, in crystal form or after being coated with different polymers, are important research topics at the Institute of Aeronautics and Space (IAE) laboratories (Mattos-b, 2004; Mattos, 2002; Mattos, 2003). Thus, the aim of this work is to present a new quantitative method for the determination of the HMX/Viton ratio based on FT-IR/ATR, using TG data as the reference.

The advantages of the method presented here are the few steps involved and the fact that it does not require complex sample preparation.

EXPERIMENTAL

Materials

Fluoroelastomers are generically referred to as FKM polymers (nomenclature per ASTM D1418). Viton is a trademark of a series of fluoroelastomers manufactured by DuPont, which is available in several formulations (copolymer, terpolymer) and forms (slab, stick, pellet) (Mattos, 2008; Hohmann, 2000). Fluoroelastomers are used in the coating of energetic materials. Viton shows excellent chemical stability and is a widely used polymer for coating the energetic materials of propellants. The ratio between Viton and the oxidant compounds has a significant influence on the oxidation time, as cited in the literature (Hohmann, 2000).

HMX is also known as 1,3,5,7-tetranitro-1,3,5,7-tetrazacyclooctane, cyclotetramethylene tetranitramine (C4H8N8O8) or octogen. It is an important ingredient in modern solid propellants due to its desirable properties, such as absence of smoke, high specific impulse and thermal stability (Tang, 1999). It is used in certain propellants and explosives (Kohno, 1994). Propellants based on HMX can be found in armament applications and in solid rocket propulsion systems (Tang, 1999).

The specific gravity of the crystals is 1.90 g/cm3 and the melting point is around 280°C. HMX exists in four polymorphic forms (α-HMX, β-HMX, δ-HMX, γ-HMX). The most common modification, stable at room temperature, is β-HMX (United States-TM 9, 1979).

HMX has been used in metal forming, in energy transfer and in the composition of melt-cast and pressed plastic bonded explosives (Calzia, 1969; Urbanski, 1984).

The plastic coating of explosive grains by the water “slurry” process is characterized by the migration of the explosive, in which the transfer of the explosive occurs from one liquid phase to another in a liquid system with two immiscible phases (James, 1965; Benziger, 1973; Kneisl, 2003).

In the coating of HMX crystals by polymers, the explosive is dispersed in water (inorganic liquid phase) and the polymer is dissolved in an organic solvent (organic liquid phase). The organic phase is usually called “lacquer”: a plastic dissolved in a low boiling point organic solvent (James, 1965; Benziger, 1973; Kneisl, 2003).

The coating process starts with the dispersion of the energetic material in water (a system with moderate agitation and heating). The hot lacquer is then added to the system; due to the low solubility of the lacquer in the system, the plastic plates agglutinate onto the explosive crystals (James, 1965; Benziger, 1973; Kneisl, 2003).

The organic solvent is removed by distillation and the formation of “pellets” then occurs, due to the agglutination between the polymer and the explosive crystals (James, 1965; Benziger, 1973; Kneisl, 2003). It is important to mention that the inorganic liquid phase (water) is very important for process safety, both during the heating step, while the explosive is not yet coated, and during the distillation step. It provides good temperature homogenization, preventing overheating of the system (Kim, 1999; James, 1965; Benziger, 1973; Kneisl, 2003).

Measuring methods

Thermogravimetric analyses were carried out in a Mettler TGA/SDTA851e instrument, calibrated with aluminum and indium, at a heating rate of 10 °C/min under a nitrogen flow of 40 mL/min. About 2.0 mg of sample was used and the heating range was from 50 to 700 °C.

The FT-IR spectra were recorded using a PerkinElmer Spectrum One spectrometer. The conditions were: resolution of 4 cm-1, gain 1, range from 4000 to 700 cm-1, 40 scans, germanium crystal and incidence angle of 45°. The samples were analyzed by the germanium ATR technique, with sample placed on both sides of the crystal to obtain a better spectrum.

RESULTS AND DISCUSSION

PBX samples of HMX and Viton (HMX/Viton systems) were prepared with the following Viton contents: 2, 5, 8, 10, 13 and 15%, in order to develop a quantitative FT-IR methodology.


Table 1: HMX contents in the HMX/Viton samples obtained by TG

Code    Theoretical HMX/Viton content (%)    HMX content by TG (%)    Viton content by TG (%)    Standard deviation (%)
01/06   98/2                                 97.87                    2.13                       0.68
01/95   95/5                                 95.24                    4.76                       0.25
05/06   92/8                                 93.67                    6.33                       0.93
04/06   90/10                                91.10                    8.90                       1.06
06/06   87/13                                89.88                    10.12                      0.79
02/06   85/15                                89.98                    10.02                      1.06

Figure 1: TG and DTG curves of HMX sample in atmosphere of N2

Figure 3: TG Curves of the system HMX/Viton sample in atmosphere of N2

TG Analysis of HMX

HMX (class 3), obtained from SNPE (Société Nationale des Poudres et Explosifs), was used to prepare the polymeric coatings; its thermal behavior is shown in Fig. 1.


It can be observed that HMX undergoes a sudden thermal decomposition, with the maximum rate of mass loss, according to the DTG curve, occurring at around 285°C.

TG Analysis of VITON

To analyze the thermal behavior of Viton B, TG analyses were conducted. The thermal curve in Fig. 2 illustrates the thermal behavior of this polymer.

Observing Fig. 2 in comparison with Fig. 1, it is evident that the Viton decomposition takes place at a much higher temperature than that of HMX, allowing the quantification of both materials when they are combined in the PBX. According to the DTG curve, the maximum rate of Viton mass loss is observed at 487.8 °C.

Characterization of PBX (HMX/Viton) by TG analysis

The PBX (HMX/Viton) samples cited in Table 1 were analyzed by TG, and Table 1 shows the results obtained. According to Table 1, however, the covering process was not so efficient for the higher Viton contents (samples 06/06 and 02/06).

Figure 3 shows the thermal decomposition curves of PBX (HMX/Viton).

Figure 2: TG and DTG curves of Viton B sample in atmos-phere of N2


Figure 3 clearly shows that the decomposition stages of the PBX constituents are quite distinct and do not interfere with each other. The contents of the constituents can then be determined by means of TG, by associating the nominal content with the percentage decomposition of each stage of mass loss observed.
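This step quantification can be illustrated with a short script. The following is a minimal sketch, not the authors' code: it assumes the TG data are available as arrays of temperature and residual mass, and it splits the total mass loss at an arbitrary temperature (400 °C here) chosen between the HMX decomposition (~285 °C) and the Viton decomposition (~488 °C) reported above; the function name and the synthetic example data are hypothetical.

```python
# Minimal sketch (not from the paper): estimating HMX and Viton contents from a
# TG curve by splitting the total mass loss at a temperature between the two
# decomposition events (~285 C for HMX, ~488 C for Viton, as reported above).
import numpy as np

def tg_step_contents(temperature_c, mass_mg, split_c=400.0):
    """Return (hmx_percent, viton_percent) from TG arrays of temperature and mass."""
    m0 = mass_mg[0]                                    # initial sample mass
    i_split = np.searchsorted(temperature_c, split_c)  # index separating the two steps
    m_split = mass_mg[i_split]                         # mass remaining after the HMX step
    m_end = mass_mg[-1]                                # residual mass after the Viton step
    hmx_loss = (m0 - m_split) / m0 * 100.0             # first-step mass loss (%)
    viton_loss = (m_split - m_end) / m0 * 100.0        # second-step mass loss (%)
    return hmx_loss, viton_loss

# Example with synthetic data: 2.0 mg sample, nominally 90/10 HMX/Viton, 50-700 C
t = np.linspace(50.0, 700.0, 500)
m = np.where(t < 285.0, 2.0, np.where(t < 488.0, 0.2, 0.0))
print(tg_step_contents(t, m))   # ~ (90.0, 10.0)
```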

FT-IR/ATR of HMX sample

The main absorptions observed in the FT-IR spectrum of HMX (Fig. 4), with their probable assignments (Smith, 1979; Litch, 1970; Hummel, 1981; Bedard, 1962), are at 3035 cm-1 (ν CH2), 1564 cm-1 (νa NO2), 1462, 1432, 1396 and 1347 cm-1 (δs CH2), 1279 and 1202 cm-1 (νs NO2 + ν N-N), 1145, 1087 and 964 cm-1 (ν N-N + ν ring), 946 cm-1 (ring stretching), and 830 and 761 cm-1 (δ and γ NO2). The observed bands are characteristic of β-HMX, according to the literature (Smith, 1979; Campbell, 2000; Achuthan, 1990).

ATR analysis of the HMX/Viton

The combination of infrared spectroscopy with reflection theory has produced advances in surface analysis. The FT-IR/ATR technique combines the power of IR spectroscopy with the optics of attenuated total reflection. The concept of internal reflection spectroscopy originates from the fact that radiation propagating in an optically dense medium of higher refractive index undergoes total internal reflection at the interface with an adjacent medium of lower optical density. This wave is termed evanescent (Nogueira, 2000; Kwan 1998; PerkinElmer 2005).

The evanescent wave penetrates only a few microns (0.5-5 µm) beyond the crystal surface and into the sample. Consequently, there must be good contact between the sample and the surface of the crystal. In the regions of the infrared spectrum where the sample absorbs energy, the evanescent wave is attenuated or altered. The attenuated energy from each evanescent wave is passed back to the IR beam, which exits at the opposite end of the crystal and is passed to the detector of the IR spectrometer. The system then generates an infrared spectrum (PerkinElmer, 2005).
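The statement that the evanescent wave penetrates only a few microns can be checked with the standard ATR penetration-depth relation, d_p = λ / (2π n1 √(sin²θ − (n2/n1)²)). The sketch below is not from the paper: the refractive indices assumed for germanium (n1 ≈ 4.0) and for a typical organic sample (n2 ≈ 1.5) are common literature values, not data reported by the authors.

```python
# Minimal sketch (standard ATR relation, not taken from the paper): penetration
# depth of the evanescent wave for a germanium crystal (n1 ~ 4.0, assumed) and an
# organic sample (n2 ~ 1.5, assumed) at the 45 degree incidence angle used here.
import math

def penetration_depth_um(wavenumber_cm1, n_crystal=4.0, n_sample=1.5, angle_deg=45.0):
    wavelength_um = 1.0e4 / wavenumber_cm1            # convert cm^-1 to micrometres
    theta = math.radians(angle_deg)
    root = math.sqrt(math.sin(theta) ** 2 - (n_sample / n_crystal) ** 2)
    return wavelength_um / (2.0 * math.pi * n_crystal * root)

for nu in (890, 945, 1170):                           # analytical bands used in this work
    print(f"{nu} cm-1: d_p ~ {penetration_depth_um(nu):.2f} um")
```

For germanium at 45°, the estimate falls toward the lower end of the 0.5-5 µm range quoted above.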

Using the ATR technique with a germanium crystal, with sample on both sides of the crystal, characteristic spectra were obtained for Viton and for the PBX, as can be observed in the set of spectra in Fig. 6.

The samples of the HMX/Viton system were analyzed using different FT-IR techniques; however, the FT-IR/ATR technique with a germanium crystal at 45° gave spectra with better resolution (Mattos, 2009).

As observed in the spectra of Fig. 6, the spectrum of the HMX/Viton (85/15%) sample presents characteristic Viton bands at 1169 cm-1 and 890 cm-1. In this way, the best analytical band for Viton was chosen.

All the coated samples (HMX/Viton) were analyzed by ATR with a germanium crystal at 45°.

Figure 4: FT-IR/ATR spectrum in crystal germanium of HMX

FT-IR/ATR of Viton sample

The presence of C-F groups is detected in the mid-infrared (MIR) region (Mattos, 2008; Urbanski, 1977) on the basis of the intense absorptions between 1397 and 1074 cm-1 (ν C-F) (Mattos, 2008; Smith, 1979). The characteristic bands of the C-F2 linkage, of medium intensity, are found at 1273, 1191, 1134 and 1111 cm-1 (ν C-F2), and those of C-F3 at 890 and 820 cm-1 (ν C-F3) (Mattos, 2008; Silverstein, 1981).

The ATR/FT-IR technique can be conducted with two different crystals (germanium and KRS-5). The ATR technique with a germanium crystal (sample on both sides of the crystal, 45°) gives good results, making it possible to see the changes in the characteristic polymer bands (ν C-F, ν C-F2 and ν C-F3) as a function of the polymer content in the sample, with a well defined baseline.

Figure 5 illustrates the characteristic spectrum for the Viton B sample.

Figure 5: FT-IR/ATR spectrum in crystal germanium of Viton B


It can be observed that, as the Viton concentration in the mixture increases, there is a widening in the C-F absorption region (approximately 1200 cm-1), indicating the presence of the polymer in the HMX/Viton energetic material composition. The band at 945 cm-1, characteristic of HMX, is also evident in this set of spectra; as the Viton content increases, a reduction in the intensity of the HMX band is observed, due to the presence of the polymer.

Characteristic Viton bands were chosen at 890 cm-1 and 1170 cm-1; for HMX, the bands at 945 cm-1 and 1145 cm-1 were used. Since the Viton bands increase in intensity with the polymer concentration in the system, it was possible to carry out the quantification of these samples by FT-IR.

TG was used as the reference for the IR methodology in this system (Table 1). Next, the analytical IR bands for determining the Viton percentage in the HMX/Viton system were analyzed. The baseline chosen for the bands at 945 and 890 cm-1 went from 986 to 853 cm-1, and for the bands at 1145 and 1170 cm-1, from 1476 to 984 cm-1.

The absorbance values represent the median of five analyses. According to Hórak (1978), it is recommended to work with the median when only a small number of experimental values is available. It may happen that the values of the parameters μ̂ and σ̂ thus determined are subject to larger errors; these errors are difficult to determine due to a non-uniform distribution of random errors in the set. Therefore, a difference assessment is made. The standard deviation of the median absorbance, σ̂μ, is calculated as follows (Hórak, 1978):

σ̂μ = σ̂ / √n (1)

where σ̂ is the assessed standard deviation of the basic set and is a quantitative measure of the precision of each individual measurement, and n is the number of experiments;

σ̂ = KR · R (2)

where KR is the coefficient for the calculation of the average standard deviation from the variation range (for five experiments, KR = 0.430) (Hórak, 1978), and R is the difference between the largest and the smallest element (Xn – X1). σ̂μ is an evaluation of the precision of the median, that is, of the result obtained from the treatment of a finite set of measurements repeated under completely identical conditions. The relative error for each sample was determined as follows:

relative error (%) = (σ̂μ / μ̂) × 100 (3)
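A minimal sketch of the statistics of Eqs. (1) to (3), as reconstructed above, is given below; the five absorbance readings in the example are hypothetical, and the range-based estimate with KR = 0.430 follows Hórak (1978) as cited in the text.

```python
# Minimal sketch of the statistics described above (Horak, 1978): median of five
# absorbance readings, range-based standard deviation (Eq. 2, K_R = 0.430 for n = 5),
# standard deviation of the median (Eq. 1) and relative error (Eq. 3).
import math
import statistics

K_R = 0.430                                   # coefficient for n = 5 (Horak, 1978)

def median_stats(absorbances):
    n = len(absorbances)
    m = statistics.median(absorbances)        # median absorbance (mu-hat)
    r = max(absorbances) - min(absorbances)   # range R = Xn - X1
    sigma = K_R * r                           # Eq. (2)
    sigma_m = sigma / math.sqrt(n)            # Eq. (1)
    rel_error = sigma_m / m * 100.0           # Eq. (3), in percent
    return m, sigma, sigma_m, rel_error

# Hypothetical set of five relative absorbance readings for one sample
print(median_stats([4.1, 4.3, 4.28, 4.4, 4.2]))
```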

Figure 6: FT-IR/ATR spectra using germanium crystal: Viton B and Cob. 02/06-85/15%

Figure 7: FT-IR/ATR spectra for HMX/Viton using germanium crystal with sample in both sides of the crystal at 45o

With this technique, better results were obtained for indicating the analytical bands and for the quantitative determination, as represented in Fig. 7.

According to the set of FT-IR/ATR absorption spectra in Fig. 7, characteristic bands of HMX and Viton can be observed. The band at approximately 1200 cm-1 is characteristic of the C-F groups present in Viton.


Analyzing Table 2, it is observed that the absorbance values of the bands at 890 and 1170 cm-1 follow a linear relation with the Viton content. With the data from the thermogravimetric analyses (reference method) and from the FT-IR analysis of the characteristic Viton bands at 890 cm-1 and 1170 cm-1, it is possible to obtain linear relations for the determination of the Viton content in the HMX/Viton system, using the relative band method.

Viton content by TG/FT-IR using relative band

It is known that the relative band method (relative absorbance) can be used to correct for specimen thickness (Gedeon, 1985). Thus, for the quantitative FT-IR analysis of the HMX/Viton samples, the absorbance values at 945 cm-1 (HMX) were related to those corresponding to the absorption at 890 cm-1 (Viton), in order to evaluate whether there would be an improvement in the precision of the methodology developed. A baseline between 986 cm-1 and 853 cm-1 was established for the calculation of the absorbance values of the ring vibration at 945 cm-1 and of the C-F band at 890 cm-1. To apply the Lambert-Beer law in this analysis, the relative absorbance A945/A890 was plotted as a function of the explosive/polymer composition (percentage by weight), as presented in Table 3.
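The band absorbances could be extracted from a spectrum as sketched below. This is an assumed implementation, not the authors' procedure: the baseline correction simply subtracts a straight line drawn between the two anchor wavenumbers (986 and 853 cm-1 for the 945 and 890 cm-1 bands), and the file name in the commented usage example is hypothetical.

```python
# Minimal sketch (assumed implementation, not the authors' code): baseline-corrected
# absorbance of a band, using a straight baseline drawn between two anchor
# wavenumbers (e.g. 986 and 853 cm-1 for the 945 and 890 cm-1 bands).
import numpy as np

def band_absorbance(wavenumbers, absorbance, peak, base1, base2):
    # wavenumber axis assumed to be in increasing order for np.interp
    a_peak = np.interp(peak, wavenumbers, absorbance)
    a_1 = np.interp(base1, wavenumbers, absorbance)
    a_2 = np.interp(base2, wavenumbers, absorbance)
    # linear baseline evaluated at the peak position
    frac = (peak - base1) / (base2 - base1)
    baseline_at_peak = a_1 + frac * (a_2 - a_1)
    return a_peak - baseline_at_peak

# Example: relative absorbance A945/A890 from a spectrum stored as (nu, A) arrays
# nu, A = np.loadtxt("hmx_viton_atr.csv", delimiter=",", unpack=True)  # hypothetical file
# a945 = band_absorbance(nu, A, 945, 986, 853)
# a890 = band_absorbance(nu, A, 890, 986, 853)
# print(a945 / a890)
```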

Figure 8 shows the plot of the median values of A945/A890 as a function of the HMX/Viton concentration ratio (% m/m); a good linear relation is obtained (R = 0.993). From the analytical curve obtained by the FT-IR/TG analysis (Table 3), the following relation is proposed:

y = -3.95 + 0.7x (4)

where y is the median value of the relative absorbance (A945/A890) and x is the HMX/Viton concentration ratio.
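The analytical curve of Eq. (4) can be reproduced by an ordinary least-squares fit of the Table 3 data; numpy.polyfit is used here as one possible tool, not necessarily how the authors obtained the coefficients.

```python
# Minimal sketch (assumed approach): least-squares fit of the analytical curve of
# Eq. (4), relative absorbance A945/A890 (y) versus HMX/Viton content ratio by TG (x),
# using the median values listed in Table 3.
import numpy as np

x = np.array([45.95, 20.01, 14.72, 10.24, 8.88, 8.98])   # HMX/Viton content by TG
y = np.array([28.50, 9.60, 4.28, 4.62, 3.00, 2.21])      # median A945/A890

slope, intercept = np.polyfit(x, y, 1)
r = np.corrcoef(x, y)[0, 1]
print(f"y = {intercept:.2f} + {slope:.2f}x,  R = {r:.3f}")  # ~ y = -3.95 + 0.70x, R ~ 0.993
```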

Table 2: Median absorbance values for analytical bands of the HMX/Viton system

HMX/Viton (%)    Median A945 (HMX)    Median A1145 (HMX)    Median A890 (Viton)    Median A1170 (Viton)
98/2             0.060                0.032                 0.002                  0.008
95/5             0.041                0.032                 0.005                  0.017
92/8             0.027                0.042                 0.006                  0.035
90/10            0.034                0.040                 0.007                  0.032
87/13            0.030                0.049                 0.010                  0.044
85/15            0.031                0.064                 0.014                  0.059

Table 3: FT-IR data (relative band A945/A890) for the PBX (HMX/Viton) samples; reference: TG data

HMX/Viton content by TG    Relative band A945/A890 (median)    Standard deviation (σ̂)    Relative error (%)
45.95                      28.50                               6.50                       22.81
20.01                      9.60                                1.11                       11.56
14.72                      4.28                                0.15                       3.50
10.24                      4.62                                0.16                       3.46
8.88                       3.00                                0.12                       4.00
8.98                       2.21                                0.07                       3.17

Figure 8: Median values of the relative absorbance A945/A890 as a function of the HMX/Viton content obtained by TG.

Table 4: FT-IR data (relative band A1145/A1170) for the PBX (HMX/Viton) samples; reference: TG data

HMX/Viton content by TG    Relative band A1145/A1170 (median)    Standard deviation (σ̂)    Relative error (%)
45.95                      4.000                                 0.500                      12.50
20.01                      1.947                                 0.036                      1.85
14.72                      1.206                                 0.005                      0.41
10.24                      1.289                                 0.016                      1.24
8.88                       1.120                                 0.007                      0.62
8.98                       1.071                                 0.005                      0.47

Table 2 shows the median absorbance values for the analytical bands evaluated.


Figure 9 shows the plot of the median values of A1145/A1170 as a function of the HMX/Viton concentration ratio (% m/m); a good linear relation is obtained (R = 0.991). From the analytical curve obtained by the FT-IR/TG analysis (Table 4), the following relation is proposed:

y = 0.344 + 0.0789x (5)

where y is the median value of the relative absorbance (A1145/A1170) and x is the HMX/Viton concentration ratio.

With the analytical curves obtained (IR/TG) for the different relative IR bands, the Viton concentrations were calculated. The linear correlation coefficients (R) were evaluated and are presented in Table 5.
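Table 5 reports Viton percentages derived from the calibrations; one way this conversion can be done is sketched below. Inverting Eq. (4) or (5) gives the HMX/Viton content ratio x; converting that ratio into a Viton weight percentage as 100/(x + 1) is an assumption of this sketch, since the paper does not state the conversion explicitly.

```python
# Minimal sketch (conversion assumed, not stated explicitly in the paper): invert the
# analytical curve to obtain the HMX/Viton ratio x from a measured relative absorbance
# y, then convert the ratio into a Viton weight percentage, 100 / (x + 1).
def viton_percent_from_relative_band(y, intercept, slope):
    x = (y - intercept) / slope          # HMX/Viton content ratio from Eq. (4) or (5)
    return 100.0 / (x + 1.0)             # Viton %, assuming x = %HMX / %Viton

# Example with the 945/890 cm-1 calibration of Eq. (4): y = -3.95 + 0.70x
print(viton_percent_from_relative_band(4.28, -3.95, 0.70))   # ~ 7.8 % Viton (compare Table 5)
```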

Figure 9: Median values of the relative absorbance A1145/A1170 as a function of the HMX/Viton content obtained by TG.

Table 5: Viton contents by TG and by the FT-IR technique (relative bands at 1145/1170 cm-1 and 945/890 cm-1)

Viton content by TG analysis (%)    Viton content by IR, relative band 1145/1170 cm-1, TG reference (%)    Viton content by IR, relative band 945/890 cm-1, TG reference (%)
2.13                                2.11                                                                    2.10
4.76                                4.68                                                                    4.90
6.33                                8.36                                                                    7.82
8.90                                7.71                                                                    7.53
10.12                               9.24                                                                    9.13
10.02                               9.82                                                                    10.18
Linear correlation                  R = 0.991                                                               R = 0.993

The data in Table 5 show that the relative band approach is well suited for the determination of the Viton content in PBX (HMX/Viton) by FT-IR, presenting excellent results, that is, high coefficients of linear correlation between the FT-IR and TG values. The best relative band was 945/890 cm-1, using TG as the reference technique, with a linear correlation of 0.993; furthermore, these bands are easier to visualize in the spectra of Fig. 7.

CONCLUSIONS

The ATR technique with a germanium crystal, with the sample placed on both sides of the crystal (45°), showed better and more detailed spectra, in which it is possible to see the changes in the characteristic polymer bands as a function of the polymer content in the explosive mixture, as well as a better defined baseline;

All the coated samples of the HMX/Viton system obtained in the laboratory were analyzed by the ATR technique with a germanium crystal (45°), which gave the best results for indicating the analytical bands and for the quantitative determination;

The increase in intensity of the Viton bands with increasing polymer content in the system made it possible to quantify these samples by FT-IR;

The use of a relative band proved to be suitable for determining the Viton content in PBX (HMX/Viton) by FT-IR, presenting excellent results. The best relative band was 945/890 cm-1, using TG as the reference technique;

The new FT-IR methodology for polymer quantification in PBX proved to be faster and does not generate chemical residues.

REFERENCES

Achuthan, C. P.; Jose, C. L., 1990, “Studies on Octahydro-1,3,5,7-Tetranitro-1,3,5,7-Tetrazocine (HMX) Polymorphism”, Propellants, Explosives, Pyrotechnics, Vol.15, pp. 271-275.

Agrawal, J. P., 2004, “Some New High Energy Materials and Their Formulations for Specialized Applications”, Propellants, Explosives, Pyrotechnics, Vol.30, Nº. 5, pp. 316.

Bedard, M. et al., 1962, “The Crystalline Form of 1,3,5,7-Tetranitro-1,3,5,7-Tetrazacyclooctane (HMX)”, Canadian Journal of Chemistry, Vol. 46, pp. 2278 – 2279.



Benziger, T. M., 1973, “High-Energy Plastic-Bonded Explosive”. U.S. Patent 3,778,319, 11 Dec., UNITED STATES ATOMIC ENERGY COMMISSION.

Calzia, J., 1969, “Les Substances Explosives et Leurs Nuisances”, Dunod, Paris, 344 p.

Campbell, M.S., Garcia, D., Idar D., 2000, “Effects of Temperature and Pressure on the Glass Transitions of Plastic Bonded Explosives”, Thermochimica Acta, Vol. 357-358, pp.89-95.

Federoff, B.F., Sheffield, O.E., 1966, “Encyclopedia of Explosives and Related Items”, Picatinny Arsenal, Dover, Vol. 8.

Mathieu, J., Stucki, H., 2004, “Military High Explosives”, Chimia, Vol. 58, Nº. 6, pp.383-389.

Gedeon, B. J., Nguyen, R. H., 1985, “Computerization of ASTM D 3677 – Rubber Identification by Infrared Spectrophotometry”, Meeting of the Rubber Division, ACS, Cleveland, pp. 1-12.

Hayden, D. J., 2005, “An Analytic Tool to Investigate the Effect of Binder on the Sensitivity of HMX-Based Plastic Bonded Explosives in the Skid Test”, Thesis, Master of Science Department of Mechanical Engineering, Institute of Mining and Technology, Socorro, New Mexico, 42p.

Hohmann, C., Tipton Jr., B., 2000, “Viton's Impact on NASA Standard Initiator Propellants Properties”, NASA, Houston (Tech. Report, NASA/TP 210187).

Hórak, M., Vitek, A., 1978, “Interpretation and Processing of Vibrational Spectra”, John Wiley & Sons, New York, 414 p.

Hummel, D. O., 1966, “Infrared Spectra of Polymers: In Medium and Long Wavelengths Regions”, John Wiley & Sons, New York, 207p.

James, E., 1965, “The Development of Plastic-Bonded Explosives”, Lawrence Radiation Lab., Univ. of California, Livermore. (Technical Report UCRL-12439).

Kasprzyk, D. J. et al., 1999, “Characterization of a Slurry Process Used to Make a Plastic-Bonded Explosive”, Propellants, Explosives, Pyrotechnics, Nº. 24, pp.333-338.

Kim, H.S., Park, B.S., 1999, “Characteristics of the Insensitive Pressed Plastic Bonded Explosive DXD-59”, Propellants, Explosives, Pyrotechnics, Nº. 24, pp. 217-220.

Kneisl, P., 2003, “Slurry Coating Method for Agglomeration of Molding Powders Requiring Immiscible Lacquer Solvents”, U.S. Patent 6,630,040 B2, 31 Jan. 2002.

Kohno, Y. et al., 1994, “A Relationship Between the Impact Sensitivity and the Electronic Structures for the Unique N-N Bond in the HMX Polymorphs”, Combustion and Flame, Vol. 96, pp. 343-350.

Kwan, K. S., 1998, “The Role of Penetrant Structure in the Transport and Mechanical Properties of a Thermoset Adhesive”, Ph.D. Thesis, Faculty of the Virginia Polytechnic Institute, Blacksburg, 285f.

Litch, H. H., 1970, “HMX (Octogen) and Its Polymorphic Forms”, Symposium on Chemistry Problems with the Stability of Explosives, Tyringe, pp. 168-179.

Mattos, E. C. et al., 2004, “Determination of the HMX and RDX Content in Synthesized Energetic Material by HPLC, FT-MIR, and FT-NIR Spectroscopies”, Química Nova, Vol. 27, Nº. 4, pp. 540-544.(a)

Mattos, E.C. et al., 2004, “Avaliação do Uso de Técnicas FT-IR para Caracterização de Cobertura Polimérica de Material Energético”, Polímeros: Ciência e Tecnologia, Vol. 14, Nº. 2, pp. 63-68. (b)

Mattos, E. C. et al., 2008, “Characterization of Polymer-Coated RDX and HMX Particles”, Propellants, Explosives, Pyrotechnics., Vol. 33, pp. 44 – 50.

Mattos, E. C. et al., 2009, “Determination of Polymer Content in PBX Composition by FT-IR”, Proceedings of 40th International Annual Conference of ICT, Karlshure.

Mattos, E. C. et al., 2002, “Aplicação de Metodologias FTIR de Transmissão e Fotoacústica à Caracterização de Materiais Altamente Energéticos” - Parte II, Química Nova, Vol. 25, pp. 722-728.

Mattos, E. C. et al., 2003, “Caracterização por FTIR de Coberturas Poliméricas de Materiais Energéticos”, Anais da Associação Brasileira de Química, Vol. 51, Nº. 4, pp. 132-135.

Nogueira, D. A. R., Rosa, P. T. V., Santana, C. C., 2000, “Adsorção de Proteínas na Superfície de Polímeros: Quantificação com FTIR/ATR”, I Brasilian Congress in Phase Equilibrium and Fluid Chemical Process Design, Águas de São Pedro, S.P., Brazil.

PERKINELMER, 2005, “FT-IR Spectroscopy, Attenuated Total Reflectance (ATR)”, TECHNICAL NOTE, Waltham (Catalogue).


Silva G., Mattos, E.C., Dutra, R.C.L., Diniz, M.F., Iha, K., 2008, “Determinação Quantitativa de TNT e HNS por TG e FT-IR”, Química Nova, Vol. 31, Nº. 6, pp. 1431-1436.

Silverstein, R. M., Bassler, G. C., Morril, T. C., 1981, “Spectrometric Identification of Organic Compounds”, 5 ed., John Wiley & Sons, New York.

Smith, A. L., 1979, “Applied Infrared Spectroscopy”, John Wiley & Sons, New York, pp. 286.

Tang, C. J. et al., 1999, “A Study of the Gas-Phase Chemical Structure During CO2 Laser-Assisted Combustion of HMX”, Combustion and Flame, Vol. 117, pp. 170-188.

Thompson, D. G., Olinger, B., DeLuca, R., 2005, “The Effect of Pressing Parameters on the Mechanical Properties of Plastic Bonded Explosives”, Propellants, Explosives, Pyrotechnics, Vol.30, Nº. 6, pp. 391-396.

United States, 1979, “Department of the Army, Military Explosives”, Washington, DC (TM 9-1300-214 C2).

Urbanski, T., 1984, “Chemistry and Technology of Explosives”, Pergamon Press, Great Britain, Vol. 4, p. 391.

Urbanski, J. et al, 1977, “Handbook of Analysis of Synthetic Polymers and Plastics”, John Wiley & Sons, New York, 494 p.

Wanninger, P. et al., 1996, “Pressable Explosives Granular Product and Pressed Explosive Charge”, US Patents 5,547,526.


Darci C. Pires
Institute of Aeronautics and Space
São José dos Campos - Brazil
[email protected]

Denise V. B. Stockler-Pinto
Institute of Aeronautics and Space
São José dos Campos - Brazil
[email protected]

Jairo Sciamareli
Institute of Aeronautics and Space
São José dos Campos - Brazil
[email protected]

Jorge Roberto da Costa
Institute of Aeronautics and Space
São José dos Campos - Brazil
[email protected]

Milton Faria Diniz
Institute of Aeronautics and Space
São José dos Campos - Brazil
[email protected]

Koshun Iha
Technological Institute of Aeronautics
São José dos Campos - Brazil
[email protected]

Rita de Cássia L. Dutra*
Institute of Aeronautics and Space
São José dos Campos - Brazil
[email protected]

* author for correspondence

Síntese e caracterização por espectroscopia no infravermelho de agente de ligação à base de hidantoína, utilizado em propelentes compósitosResumo: Reações para obtenção de derivados de hidantoína foram conduzidas a partir de aldeídos de cadeia curta e da 5,5-dimetilhidantoína. O acompanhamento das reações foi realizado por espectroscopia na região do infravermelho médio (MIR) por meio da formação de novas bandas características da estrutura do composto desejado. A análise MIR revelou que estas alterações espectrométricas ocorrem somente na reação com o formaldeído, indicando a formação do produto 1,3-bis(hidroximetil)-5,5-dimetilhidantoína, em presença de água. As bandas de absorção que confirmam a reação foram observadas em 3334 cm-1 (υ OH), 1770 e 1710 cm-1 (υ C=O) e em 1056 cm-1 (υ C-O), sendo esta última, atribuída ao grupo contendo hidroxila primária. A reação da 5,5-dimetil hidantoína com acetaldeído e com propanaldeído não ocorreu sob as condições adotadas neste trabalho.Palavras-chave: Hidantoína, Síntese, Caracterização, MIR, Propelente compósito.

Synthesis and characterization by infrared spectroscopy of hydantoin-based bonding agents, used in composite propellantsAbstract: Reactions to obtain hydantoin derivatives were carried out with 5,5-dimethylhydantoin and short-chain aldehydes. Monitoring of the reactions was performed using qualitative mid-infrared spectroscopy (MIR) through the formation of new bands characteristic of the desired product. MIR analysis showed that these spectrometric alterations occur only in the reaction with the formaldehyde, indicating the formation of the desired product, 1,3-bis (hydroxymethyl) 5,5-dimethylhydantoin, in the presence of water. The absorption bands that confirmed the reaction were observed at 3334 cm-1 (υ OH), 1770 and 1710 cm-1 (υ C=O) and 1056 cm-1 (υ C-O), the last of which is assigned to the group containing primary hydroxyl.Keywords: Hydantoin, Synthesis, Characterization, MIR, Composite propellant.

LIST OF SYMBOLS

AP        ammonium perchlorate
AQI       Chemistry Division (Divisão de Química)
IAE       Institute of Aeronautics and Space (Instituto de Aeronáutica e Espaço)
IPDI      isophorone diisocyanate
IR        infrared spectroscopy
MAPO      tris[1-(2-methyl)aziridinyl]phosphine oxide
MIR       mid-infrared spectroscopy
HTPB      hydroxyl-terminated (liquid) polybutadiene
Tepan®    product of the reaction between tetraethylenepentamine and acrylonitrile
Tepanol®  product of the reaction between tetraethylenepentamine, acrylonitrile and glycidol
THF       tetrahydrofuran
υ         stretching (axial deformation)
d         bending (angular deformation)

Received: 28/09/09    Accepted: 04/11/09

INTRODUCTION

Composite solid propellants consist of a polymeric binder and a solid load composed of an oxidizing salt and a metallic fuel, in addition to several additives, notably antioxidants, plasticizers, bonding agents, and cure and burn-rate catalysts. This work focuses on bonding agents, additives used in composite solid propellant formulations to promote physical or chemical interaction between the polymeric matrix and the solid load. Failures in this interfacial interaction can affect the mechanical and ballistic properties of the propellant and facilitate moisture attack on the surface of the ammonium perchlorate (AP) particles commonly used as oxidizer (Villar, 2006; Sciamareli, 2002; Torry, 2000; Consaga, 1990). In general, the functional groups that characterize a bonding agent are polar terminal groups with affinity for the oxidizer particles, together with groups compatible with those of the polymeric matrix (Oberth, 1995; Consaga, 1980). Accordingly, these additives contain in their structure organic functions such as aziridine amides (hydroxylated or not), di- or tri-hydroxylated amines, cyano-hydroxylated amines and di-hydroxylated amides, among others (Sciamareli, 2002).

In the open literature there are few studies on the synthesis of bonding agents and their application in the composite propellant industry, since these additives are considered strategic in the formulations; most publications are found in patent form.

There is currently a trend to replace the aziridine bonding agents based on tris[1-(2-methyl)aziridinyl]phosphine oxide (MAPO), long employed in propellant formulations, with polyamine-based agents. Despite their good functional performance, aziridine bonding agents have the drawback that their production depends on the availability of MAPO, an imported, high-cost and carcinogenic raw material (Dundar, 2005). Among the polyamine agents, the most prominent are Tepan®, obtained from the reaction between tetraethylenepentamine and acrylonitrile, and Tepanol®, produced from the reaction between tetraethylenepentamine, acrylonitrile and glycidol. The main advantages of these agents are their lower toxicity and the wide availability of the reagents for their synthesis on the domestic market, as well as better processing characteristics. Polyamines, however, have the disadvantage of releasing ammonia, which can cause incomplete cure and result in propellants with non-reproducible mechanical properties (Stockler-Pinto, 2008; Amtower, 2006; Dundar, 2005).

The literature (Dundar, 2005; Consaga, 1980) also reports the application of hydantoins as bonding agents in composite solid propellant formulations. These very versatile compounds have been employed in low-environmental-impact propellant formulations in which AP is partially or fully replaced by other oxidizers such as sodium nitrate, potassium nitrate or ammonium nitrate (Sutton, 2001; Oberth, 1995; Davenas, 1993). It is worth noting that aziridine and polyamine compounds do not perform well in these systems. Moreover, hydantoins seem to be the only bonding agents compatible with oxidizers such as HMX (cyclotetramethylene tetranitramine) and RDX (cyclotrimethylene trinitramine), and they present no storage restrictions (Dundar, 2005; Consaga, 1980). Dundar (2005) synthesized, on laboratory scale, alkyl hydantoins and their mono- and dialkylated derivatives. Some of them were used in solid propellant formulations based on hydroxyl-terminated polybutadiene (HTPB) and isophorone diisocyanate (IPDI), with AP as oxidizer. Mechanical tests at several temperatures showed that incorporating the hydantoins into the formulations improved the mechanical properties of the propellants investigated. Except for three derivatives, the compounds strengthened the bond between the solids and the polymeric matrix. The action of the bonding agents in this case is attributed to the complexation reaction that occurs between the hydantoin and the polymeric matrix.

Hydantoin is a five-membered heterocyclic compound (Fig. 1) with molecular formula C3H4N2O2, corresponding to 2,4-diketotetrahydroimidazole, although it is also named imidazolidine-2,4-dione (Oliveira, 2008). Hydantoin was discovered by Baeyer in 1861 during work on uric acid (Finkbeiner, 1965). The first structural formula for this compound was suggested by Kolbe in 1870 and modified by Strecker, who proposed the formula accepted to this day. Since then, the hydantoin nucleus and its derivatives have been studied with respect to both their chemical and their biological properties, given their great potential as prototypes for the development of new products with different applications (Oliveira, 2008).

Figure 1: Chemical structure of hydantoin

With a view to applying hydantoin compounds as bonding agents in composite propellant formulations, a study has been under way at the Chemistry Division (AQI) of the Institute of Aeronautics and Space (IAE) to modify the hydantoin ring by inserting substituents such as hydroxyl (OH) groups, which are compatible with the polymeric matrix (Uscumlic, 2003). These groups react with isocyanate (NCO) groups of the polymeric matrix during propellant cure, resulting in efficient interfacial adhesion between the polymeric matrix and the solid load and, consequently, in improved mechanical properties of the propellant (Oberth, 1995).

Literature data (Batemam, 1980) suggest that one of the most effective ways to introduce OH groups into the hydantoin molecule is through reaction with short-chain aldehydes. The reaction takes place in neutral medium and at room temperature. In this work, reactions of the hydantoin with the three shortest-chain aldehydes were carried out: formaldehyde, acetaldehyde and propionaldehyde. In principle, the product of the reaction between 5,5-dimethylhydantoin and formaldehyde is the most promising, because the hydroxyls of the resulting product are primary and less sterically hindered than those of the products of reaction with acetaldehyde and propionaldehyde. Primary hydroxyls are important because they react with the NCO groups present in the propellant faster than water does. The reaction of NCO groups with water from the air leads to losses in the mechanical properties of the propellant.

Itoi (1992) also reported the reaction of hydantoin with aldehydes at temperatures between 80 and 100°C, using an alkaline catalyst such as sodium hydroxide or sodium carbonate.

Kormachev (1990) described the preparation of 1,3-bis(hydroxyethyl)-5,5-dimethylhydantoin, which contains primary hydroxyls, from ethylene oxide. However, this reagent is gaseous and highly flammable at room temperature, which discourages the use of this synthesis route.

Infrared (IR) spectroscopy has proven to be an efficient technique for monitoring chemical reactions (Smith, 1979; Uscumlic, 2006). At AQI, this technique has already been used in research involving matrices and bonding agents for composite propellants (Dutra, 1984, 2006, 2007, 2009), through spectrometric alterations (appearance, disappearance and shift of bands, or increase or decrease of their intensity). In this context, studies for the characterization of bonding agents over a wide IR spectral range have also been carried out in our laboratories (Pires, 2008, 2009).

The objective of this work is to monitor the reaction between hydantoin and short-chain aldehydes in the mid-infrared (MIR) region.

EXPERIMENTAL

The following analytical-grade reagents were used: 5,5-dimethylhydantoin (Fluka, 98% purity); tetrahydrofuran (THF) (Merck, 99.7% purity); formaldehyde (Synth, 37% aqueous solution); acetaldehyde (Vetec, 99.5% purity); and propionaldehyde (Aldrich, 97% purity).

The reaction of 5,5-dimethylhydantoin (Fig. 2) with each of the three aldehydes investigated, taken separately, was carried out under the conditions described below. The synthesis setup consisted of a stirring plate, a three-neck round-bottom flask, a bulb condenser and an addition funnel.

Figure 2: Chemical structure of 5,5-dimethylhydantoin

Initially, the 5,5-dimethylhydantoin was dissolved in tetrahydrofuran (THF) and the aldehyde was then introduced dropwise through an addition funnel. The process was carried out at room temperature under constant stirring.

Aliquots were taken for monitoring of the synthesis process by IR analysis at the following times: at the end of the aldehyde addition and after 1, 2, 5, 10, 30, 60, 90 and 120 min, following extraction of the THF solvent.

The IR spectra of the starting materials and of the reaction mixtures were obtained by transmission techniques on a PerkinElmer Spectrum One spectrometer under the following conditions: spectral region from 4000 to 400 cm-1, resolution of 4 cm-1 and gain 1. The samples were analyzed as liquid films at the specified time intervals.
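For routine screening of a time series of spectra such as this, the appearance of new bands can also be checked numerically. The sketch below is only an illustration, not part of the original work: it assumes each spectrum was exported as a hypothetical two-column text file (wavenumber, absorbance) named spectrum_000min.dat, spectrum_001min.dat, and so on, and it simply compares the absorbance near the diagnostic wavenumbers discussed later in the Results section.

```python
# Minimal sketch (not from the original work): track the growth of
# diagnostic MIR bands over the sampling times of the synthesis.
# Assumes each spectrum is a two-column text file (wavenumber in cm-1,
# absorbance); the file names below are hypothetical.
import numpy as np

DIAGNOSTIC_BANDS = [3334.0, 1770.0, 1710.0, 1056.0]  # cm-1 (see Results)
SAMPLING_TIMES = [0, 1, 2, 5, 10, 30, 60, 90, 120]    # min

def band_absorbance(filename, center, half_width=10.0):
    """Maximum absorbance inside a +/- half_width window around 'center'."""
    wavenumber, absorbance = np.loadtxt(filename, unpack=True)
    window = (wavenumber > center - half_width) & (wavenumber < center + half_width)
    return absorbance[window].max()

for t in SAMPLING_TIMES:
    fname = f"spectrum_{t:03d}min.dat"  # hypothetical file name
    values = [band_absorbance(fname, c) for c in DIAGNOSTIC_BANDS]
    print(t, "min:", ", ".join(f"{c:.0f} cm-1 -> {a:.3f}"
                               for c, a in zip(DIAGNOSTIC_BANDS, values)))
```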

RESULTS AND DISCUSSION

MIR evaluation of systems for obtaining a hydantoin-based bonding agent

Since the hydantoin-based bonding agent is a new product to be synthesized in the AQI Synthesis Laboratory, this work proposed monitoring, by MIR, the reactions for obtaining 5,5-dimethylhydantoin derivatives. The systems evaluated, referred to in this text as formaldehyde + hydantoin, acetaldehyde + hydantoin and propionaldehyde + hydantoin, were aimed at obtaining the bonding agent according to the synthesis methodology proposed in the literature (Batemam, 1980).

Formaldehyde + hydantoin system

According to Scheme 1, the following spectrometric alterations are basically expected: replacement of the hydrogen (H) of the hydantoin NH group (sharp band) by CH2OH (broad OH band), that is, the appearance of a primary hydroxyl at approximately 3300 cm-1 and, consequently, of a C-O single bond between 1000 and 1100 cm-1. A shift of the C=O group to higher wavenumbers is also expected.


Scheme 1: Preparation of 1,3-bis(hydroxymethyl)-5,5-dimethylhydantoin

In the region around 3300 cm-1, interference from the OH band of the water used in the formaldehyde solution must also be considered; however, the broader OH profile of the solution water becomes better defined in the spectrum of the final product. In any case, the other regions, of the C=O and C-O groups, can be used for a more unequivocal identification of the presence of the hydantoin derivative.

Figure 3 shows the MIR spectra of formaldehyde, of 5,5-dimethylhydantoin and of the reaction products at different times. In the MIR spectrum of the final product (120 min of reaction), two bands assigned (Smith, 1979; Oliveira, 2008) to υ C=O appear at around 1770 and 1710 cm-1, at positions different from the υ C=O bands of formaldehyde, at 1645 cm-1, and of the hydantoin, at 1698 cm-1. This appearance is associated with the formation of structure (C), also confirmed by other bands at 3334 cm-1 (υ OH) and 1056 cm-1 (υ C-O), the latter corresponding to the group containing a primary hydroxyl. In addition, a broadening of the bands in the 1400 to 1500 cm-1 region, assigned to OH bending (d), can also be noted.

With respect to the bands in the carbonyl region, however, there is controversy in the literature regarding the assignment of these absorptions (Oliveira, 2008). Some studies associate these bands with the stretching (υ) of the carbonyls at positions C2 and C4; the band at lower wavenumber, however, has been attributed both to the C=O group at position C4 and to the C=O at position C2. On the other hand, IR and Raman spectral studies have also related these bands to the symmetric and asymmetric coupling between the carbonyl vibrations, as occurs in imides.

Figure 3: MIR spectra, formaldehyde + hydantoin system: A) formaldehyde, B) 5,5-dimethylhydantoin, C) initial reaction mixture, D) after 1 min, E) after 2 min, F) after 5 min, G) after 10 min, H) after 30 min, I) after 60 min, J) after 90 min and K) final reaction product (120 min).

However, if the resonance effect (Silverstein, 1981) that occurs in coupled C=O and N- groups is considered, in which the nitrogen lone pair is shared with the carbon of the C=O group, giving the double bond partial single-bond character, it can be suggested that the C=O at position C2, which is flanked by two N-H groups, should be the one at lower wavenumber, since single bonds appear at lower wavenumbers than double bonds.

Figure 4 shows that there is some similarity between the MIR spectrum of the final reaction product and that found in the literature (BIO-RAD Laboratories), suggesting that a hydantoin derivative was indeed obtained, probably 1,3-bis(hydroxymethyl)-5,5-dimethylhydantoin.

Acetaldehyde + hydantoin system

For the acetaldehyde + hydantoin system, the following spectrometric alterations are expected (Scheme 2): replacement of the H of the hydantoin NH group (sharp band) by CH3CHOH (broad OH band).


Figure 4: MIR spectra of the products of the reaction between formaldehyde and 5,5-dimethylhydantoin: A) experimental, B) literature (BIO-RAD Laboratories).

Scheme 2: Preparation of 1,3-bis(1-hydroxyethyl)-5,5-dimethylhydantoin

Figure 5: MIR spectra: A) acetaldehyde (Vetec, analytical grade), B) acetaldehyde reference spectrum (Pouchert, 1975).

That is, the appearance of a hydroxyl similar to a secondary one is expected, in a chemical environment different from that of the formaldehyde-containing system, at approximately 3300 cm-1, together with the C-O single bond between 1150 and 1100 cm-1 (Smith, 1979). A shift of the C=O group to higher wavenumbers is also expected.

The enol form of ketones (Morrison, 1973; March, 1977) and/or the tautomeric form of acetaldehyde (Allinger, 1978) are well known. Comparing the MIR spectrum of the analytical-grade acetaldehyde used in the reaction with the reference spectrum of the same compound (Pouchert, 1975) (Fig. 5), different absorptions are observed in addition to those characteristic of an aldehyde, probably assigned to OH, C=C and C-O groups, at 3450, 1640 and 1150 cm-1 respectively, which may be related to the formation of an enol-like or tautomeric structure.

Figure 6 includes the MIR spectra of acetaldehyde, of 5,5-dimethylhydantoin and of the reaction products at different times.

Figure 6: MIR spectra, acetaldehyde + hydantoin system: A) acetaldehyde, B) 5,5-dimethylhydantoin, C) initial reaction mixture, D) after 1 min, E) after 2 min, F) after 5 min, G) after 10 min, H) after 30 min, I) after 60 min, J) after 90 min and K) final reaction product (120 min).


Scheme 3: Preparation of 1,3-bis(1-hydroxypropyl)-5,5-dimethylhydantoin

Figure 7: MIR spectra, propionaldehyde + hydantoin system: A) propionaldehyde, B) 5,5-dimethylhydantoin, C) initial reaction mixture, D) after 1 min, E) after 2 min, F) after 5 min, G) after 10 min, H) after 30 min, I) after 60 min, J) after 90 min and K) final reaction product (120 min).

The MIR spectrum of the final product (120 min) shows absorptions similar to those found in the spectrum of 5,5-dimethylhydantoin, which strongly indicates that no reaction took place, probably owing to the inadequate conditions adopted in the synthesis process. It is therefore reasonable to suppose that the conditions need to be changed for the reaction to occur.

The reaction was also carried out in the presence of water; however, the result was similar, suggesting that, for acetaldehyde, the reaction forming the hydantoin derivative does not occur.

Propionaldehyde + hydantoin system

For this system, the same spectrometric alterations described for the acetaldehyde-containing system are expected, associated with the formation of the functional groups characteristic of the desired product (Scheme 3).

Scheme 4: Hydration of aldehydes

The comparative study of the MIR spectra of formaldehyde in aqueous medium and of the acetaldehyde and propionaldehyde spectra shows that reaction occurred only with aqueous formaldehyde. Two considerations may explain the difference in reactivity among these aldehydes. The first is related to the structure-reactivity effect: the reactivity of the carbonyl group (C=O) depends on the positive character of the carbon atom. The positive charge is altered by the inductive effect of the substituent at C1 of the aldehyde chain, and the positive character is reduced in the following order:

H2CO > RHCO

The alkyl radical (R) exerts a stronger effect than the hydrogen (H) present in the formaldehyde molecule, hindering the reaction.

The second consideration is the effect of the presence of water in the formaldehyde-containing system, forming the hydrates shown in Scheme 4 (March, 1977).

Figure 7 includes the MIR spectra of propionaldehyde, of 5,5-dimethylhydantoin and of the reaction products at different times. In the MIR spectrum of the final product (120 min), the absorptions are similar to those found in the spectrum of 5,5-dimethylhydantoin, suggesting that no reaction occurred, probably because of the lower reactivity of the propionaldehyde carbonyl. In the reaction with formaldehyde, the hydrated structure, containing two OH groups, promotes a -I inductive effect (March, 1977), which increases the positive charge on the carbon (C), facilitating the electrophilic attack of the reagent on the hydantoin substrate. These two considerations add up, leading to a relatively fast reaction under mild conditions.

The hydrates are probably further stabilized by hydrogen-bonding interactions established between the hydroxyl groups and the electronegative oxygen atoms.

CONCLUSIONS

The reactions of 5,5-dimethylhydantoin with aldehydes were carried out under mild conditions. For both acetaldehyde and propionaldehyde, IR analysis indicated that the final product obtained is 5,5-dimethylhydantoin itself, clearly showing that the reaction does not occur under the conditions employed. On the other hand, the same IR analysis showed that spectrometric alterations do occur in the reaction with formaldehyde, indicating the formation of the desired product, 1,3-bis(hydroxymethyl)-5,5-dimethylhydantoin, in the presence of water.


Thus, in view of the results obtained, the use of acetaldehyde and propionaldehyde under the conditions studied to obtain hydantoin derivatives was discarded. This project will continue only with the synthesis of 1,3-bis(hydroxymethyl)-5,5-dimethylhydantoin under mild conditions; in the future, however, modified processes may be the subject of another study within the same line of research.

ACKNOWLEDGEMENTS

The authors thank IAE for the financial support and encouragement to publish new research, and the secretaries Laís Tereza Fabri and Solange de L. Ribeiro Camargo for preparing and formatting the figures.

REFERENCES

Allinger, N. L.; et al., 1978, “Química Orgânica”, 2 ed., Editora Guanabara Dois, Rio de Janeiro, Brazil, 961 p.

Amtower, II, 2006, “Propellant Formulation”, US Patent 7011722.

Batemam, J. H., 1980, “Hydantoin and Derivatives”, Kirk-Othmer, Encyclopedia of Chemical Technology, Vol. 12, Ed. Martin Grayson, Wiley-Interscience Publication, New York.

BIO-RAD Laboratories, Philadelphia, USA.

Consaga, J. P., 1980, “Dimethyl Hydantoin Bonding Agents in Solid Propellants”, US Patent 7011722.

Consaga, J. P, 1990, “Bonding Agent for Composite Propellants”, US Patent 4944815.

Davenas, A., 1993, “Solid Rocket Propulsion Technology”, Pergamon Press, London, 606 p.

Dundar, D., Gullu, M., Ak, M. A., Puskulcu, G., Yildirim, C., 2005, “Synthesis and Application of Bonding Agents Used in Rocket Propellants”, Proceedings of the 2nd International Conference on Recent Advances in Space Technologies, RAST, Istanbul, pp. 335-338.

Dutra, R. C. L., 1984, “Estudo de Reação de Polibutadieno Carboxilado com Aziridina Através de Espectrometria no Infravermelho”, Thesis, Universidade Federal do Rio de Janeiro, R.J., Brazil, 139 p.

Dutra, R. C. L., Oliveira, J. I. S., Kawamoto, A. M., Diniz, M. F., Keicher, T., 2007, “Determination of CHN Content in Energetic Binder by MIR Analysis”, Polímeros, Vol. 17, pp. 43-47.

Dutra, R. C. L., Oliveira, J. I. S., Diniz, M. F., Kawamoto, A.M., Keicher, T., 2006, “Characterization of Poly-AMMO and Poly-BAMO and their Precursors as Energetic Binders to be Used in Solid Propellants”, Propellants, Explosives, Pyrotechnics, Vol. 31, pp. 395–400.

Dutra, R. C. L., Pires, D. C., Kawamoto, A. M., Mattos, E. C., Diniz, M. F., Koshun, I., 2009, “Avaliação de Agente de Ligação Aziridínico por meio de Técnicas de Análise Química e Instrumental” , Journal of Aerospace Technology and Management, Vol.1, Nº. 1, pp. 55–61.

Finkbeiner, H., 1965, “The Carboxylation of Hydantoins”, J. Org. Chem., Vol. 30, Nº. 10, pp. 3414-3419.

Itoi, A., Omura, M., Ogata, H., Orukawa, A., Kageyama, T., Shimotochidana, M., 1992, "Preparation of dimethylolhydantoins from hydantoins", Jpn. Kokai Tokkyo Koho, 4 pp.

Kormachev, V. V., Kolyamshin, O. A., Mitrasov, Y. N., Bratilov, B. I., Kozyrev, S. V., 1990, “Preparation of 1,3-bis (2-hydroxyethyl) 5,5-hydantoin”, Chuvash State University, CODEN: URXXAF SU 1555327 A119900407. USSR. SU 88-437208819880128.

March, J., 1977, “Advanced Organic Chemistry- Reactions, Mechanisms, and Structure”, 2. ed., McGraw-Hill Kogakusha Ltd., 1328 p.

Morrison, R. T.; Boyd, R. N., 1973, “Organic Chemistry”, 3. ed., Allyn and Bacon, Boston, 1258 p.

Oberth, A., 1995, “Bonding Agents for HTPB-Type Solid Propellants”, US Patent 5417895.

Oliveira, S.M., Silva, J.B., Hernandes, M.Z., Lima, M. C.A., Galdino, S.L., Pitta, I.R., 2008, “Estrutura, Reatividade e Propriedades Biológicas de Hidantoínas”, Química Nova,Vol. 31, Nº. 3, pp. 614-622.

Pires, D. C., Diniz, M. F., Dutra, R. C. L., Koshun, I., 2008, “Avaliação da Aplicabilidade da Espectroscopia NIR à Caracterização de Aminas em Agentes de Ligação Usados em Propelentes Sólidos”, VI Workshop em Física Molecular e Espectroscopia – ITA, São José dos Campos, S.P., Brazil.

Pires, D.C., Dutra, R. C. L., Mattos, E. C., Sciamareli, J., Koshun, I., Kawamoto, A. M., Diniz, M. F., Costa, J.R., 2009, “Evaluation of NIR Spectroscopy for Amine Characterization in Bonding Agents Used in Solid Propellants”, Proceedings of the 40th International Annual Conference of ICT, Karlsruhe, Germany, pp. 46-1 - 46-12.

Pouchert, C. J., 1975, “Aldrich Library of Infrared Spectroscopy”, Aldrich Company, 2. ed., 1575 p.


Sciamareli, J., Takahashi, M.F.K., Teixeira, J. M., Iha, K., 2002, “Propelente Sólido Compósito Polibutadiênico: I-Influência do Agente de Ligação”, Química Nova, São Paulo, Vol. 25, Nº.1, pp. 107-110.

Silverstein, R. M., Bassler G. C., Morril T. C., 1981, “Spectrometric Identification of Organic Compounds”, 4. ed., John Wiley & Sons, New York, USA, 442 p.

Smith, A.L, 1979, “Applied Infrared Spectroscopy”, John Wiley & Sons, New York, USA, 322 p.

Sykes, Peter, 1969, “Guia de Mecanismos da Química Orgânica”, Editora Livro Técnico S.A. e Editora da Universidade de S. Paulo Rio de Janeiro, R.J, Brazil, 302 p.

Stockler-Pinto, D.V.B., Rezende, L.C., Magalhães, J.B., Domingues, L.A.K., Vestali, I.N., Cruz, S.M., Leal, S.D., 2008, “Formulation Tailoring of AP/HTPB Composite Propellants Containing a Polyamine-type Bonding Agent” Proceedings of the 39th International Annual Conference of ICT, Karlsruhe, Germany, pp. 79-1 - 79-12.

Sutton G.P., Biblarz, O., 2001, “Rocket Propulsion Elements”, John Wiley & Sons., USA, 751 p.

Torry, S., Cunliffe, A., 2000, “Humid Ageing of Polybutadiene Based Propellants”, Proceedings of the 31th International Annual Conference of ICT, Karlsruhe, Germany, pp. 25-1 - 25-11.

Uscumlic, G. S., Kshad, A. A., Mijin, D. Z., 2003, “Synthesis and Investigation of Solvent Effects on the Ultraviolet Absorptions Spectra of 1,3-Bis-Substituted-5,5-Dimethylhydantoins”, J. Serb. Chem. Soc., Vol. 68, Nº. 10, pp. 699-706.

Uscumlic, G., Zreigh, M., Mijin, D., 2006, “Investigation of the Interfacial Bonding in Composite Propellants. 1,3,5-Trisubstituted Isocyanurates as Universal Bonding Agents”, J. Serb. Chem. Soc., Vol. 71, Nº. 5, pp. 445-458.

Villar, L. D., Stockler-Pinto, D. V. B., Rezende, L.C., 2006, “Effects of Humidity on the Mechanical and Burning Properties of AP/HTPB Composite Propellants”. Proceedings of the 37th International Annual Conference of ICT, Karlsruhe, Germany, pp. 89-1 - 89-12.


Rafael F. Heitkoetter*, Institute of Aeronautics and Space, São José dos Campos, Brazil, [email protected]
Sérgio Frascino M. Almeida, Technological Institute of Aeronautics, São José dos Campos, Brazil, [email protected]
Luís E. V. Loures da Costa, Institute of Aeronautics and Space, São José dos Campos, Brazil, [email protected]

* author for correspondence

Computational simulation of non-geodesic filament winding of a rocket motor pressure vessel
Abstract: The main objective of this work was to perform, in the CadWind® software, the computational simulation of the filament winding of a pressure vessel for a rocket motor. The simulation was of a non-geodesic helical winding in which the winding angle varies along the cylindrical section of the pressure vessel, because the polar openings are different and the vessel is long. Simulations varying the winding pattern were also carried out. The results indicate that a higher winding pattern provides better fiber anchoring and distribution on the domes.
Keywords: Filament winding, Non-geodesic winding, Pressure vessel, Motor case, Rocket.

INTRODUCTION

Upper-stage motors of space vehicles stand out among the components that benefit most from being produced in composite materials, since composites ensure a good ratio between structural mass reduction and payload gain, as well as shorter production time. This ratio varies from rocket to rocket, but it can range from approximately 1 kg of payload gained per 80 kg of mass saved in the initial stages up to 1 kg per 1 kg in the last stage (Heitkoetter, 2009).

Filament winding for the manufacture of composite motor cases is a technique widely used in the aerospace industry. It consists of winding a reinforcement, which can be a fiber or a tape of structural material, around a mandrel that gives the part its final shape.

The deposition path of the reinforcement can follow several patterns, depending on the loads acting on the structure and on its desired final shape. The possibility of depositing the fibers oriented along the directions of highest stress is one of the great advantages of the winding method, since a better arrangement of the reinforcements in the structure can be achieved, increasing the structural strength at the required locations and directions.

For filament winding of vessels with equal polar openings, an isotensoid geodesic helical winding is used, since this model provides uniform tension in the filaments and eliminates slippage during manufacture (Heitkoetter, 2009). When the polar openings are different, planar winding can be used. This winding model allows the manufacture of vessels with filaments oriented primarily along the principal stress directions; however, when the opening diameters become large or the vessel becomes long, this winding becomes unstable and excessive slippage may occur (Heitkoetter, 2009).

Received: 29/09/09 Accepted: 14/10/09


METHODOLOGY

The winding simulation sequence in CadWind® consists of the following phases: generation of the winding mandrel, input of the material parameters, choice of the winding technique, determination of the winding angle, definition of the fundamental trajectory (master path) through adjustment of the winding angle, choice of the recommended patterns, and simulation of the winding.

For the generation of the winding mandrel, the geometric parameters already defined in the design were entered, using as an example an S-30 motor case of the VSB-30 sounding vehicle; the required parameters are presented in Tab. 1 (Heitkoetter, 2009).

Figures 1 and 2 show the winding mandrel in isometric and side views.

Parameter                              Value entered in CadWind®
Cylindrical section diameter           542 mm
Cylindrical section length             2426 mm
Forward dome polar opening diameter    280 mm
Forward dome length                    185 mm
Forward dome profile                   Elliptical
Aft dome polar opening diameter        400 mm
Aft dome length                        230 mm
Aft dome profile                       Elliptical

Table 1: Parameters required for generation of the mandrel

Figure 1: Isometric views of the winding mandrel: a) forward view, b) aft view

Figure 2: Side view of the winding mandrel

Parameter                Value entered in CadWind®
Number of rovings        6
Roving width             2.64 mm
Fiber volume fraction    60%
Linear density           800 tex [g/km]
Fiber density            1.75 g/cm3
Resin density            1.15 g/cm3

Table 2: Material parameters

After the winding mandrel was generated, the next step was to enter the material parameters into the program, in this case those of the already impregnated carbon fiber; these parameters are presented in Tab. 2 (Marinucci, 2001).

The roving width of 2.64 mm and the number of rovings, equal to six, were adopted values. Fiber volume fractions in the range of 40% to 60% are recommended by standard (ASTM D-2243, 1995).

After the material parameters were entered into CadWind®, the next step was to choose the winding technique. Since the polar openings of the domes are different, an isotensoid geodesic helical winding is not applicable in this case, and the chosen technique has to be a non-geodesic winding.

The non-geodesic calculation offers a wider range of winding possibilities and can be used for all mandrel geometries, except T-shaped mandrels. This winding can define the laminate structure for each section of the mandrel, that is, the winding angle can be varied along the cylindrical part of the mandrel. The winding path is calculated according to the desired winding angle, taking into account the fiber slippage limit, which is related to the friction factor between the fiber and the mandrel surface.

This friction allows a deviation from the geodesic line, according to the friction factor. CadWind® calculates the maximum deviation before the fiber starts to slip and then optimizes the fiber position according to the specified winding angle and the required pattern.

As a guide for friction factors, CadWind suggests the values presented in Tab. 3 (Material S/A, 2002).


Surface                      Dry fiber   Wet fiber   Prepreg fiber
Metal                        0.18        0.15        0.35
Plastic                      0.20        0.17        0.32
Dry laminate                 0.22        -           -
Wet (impregnated) laminate   -           0.14        -
Prepreg laminate             -           -           0.37

Table 3: Suggested friction factors

Figure 3: Example of a master path. Source: adapted from Material S/A (2002)

Figure 4: Sequence of the starting points of a cycle for pattern number / skip index values of 5/1, -5/1, 5/2 and -5/2.

The friction factor used in this analysis was 0.14, corresponding to a wet fiber on an impregnated laminate mandrel surface.

Once the winding technique was chosen, the next step was to determine the winding angle according to Clairaut's condition, Eq. (1) (Zickel, 1962):

α = \arcsin(r/R)     (1)

where α is the winding angle, r is the radius of the dome polar opening and R is the radius of the cylindrical section (the combustion chamber of the rocket motor) (Heitkoetter, 2009).

Applying the values of Table 1 to Eq. (1) gives a winding angle of 31.10° for the forward dome and 47.56° for the aft dome.
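As a quick numerical check of Eq. (1) with the Table 1 geometry, the short sketch below (not part of the original study) reproduces the two winding angles; the radii are simply half of the diameters listed in Table 1.

```python
# Minimal check of Clairaut's condition, Eq. (1): alpha = arcsin(r / R).
# Values taken from Table 1; radii are half of the listed diameters.
import math

R = 542.0 / 2.0          # cylindrical section radius, mm
r_forward = 280.0 / 2.0  # forward dome polar opening radius, mm
r_aft = 400.0 / 2.0      # aft dome polar opening radius, mm

for name, r in [("forward dome", r_forward), ("aft dome", r_aft)]:
    alpha = math.degrees(math.asin(r / R))
    print(f"{name}: winding angle = {alpha:.2f} deg")
# Expected output: about 31.10 deg (forward) and 47.56 deg (aft).
```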

After the winding angle was determined, the winding parameters were adjusted through the master path, which corresponds to the first winding cycle used to establish a pattern. It is the preliminary calculation for the initial frame, in which the winding starts and ends in the same frame; the frames correspond to each division of the winding mandrel. For better understanding, Fig. 3 shows an example presenting the initial frame, the master path and the pattern point.

Figure 5: Pattern number / skip index of 7/5

One of the parameters to be adjusted is the pattern number, which characterizes the winding pattern and can be positive or negative. When a positive value is entered, the new starting point of a cycle is to the left of the previous one; when a negative value is entered, it is to the right of the previous one. Next to the pattern number, a skip index value can be entered; the two values are separated by a slash "/". Figures 4 and 5 show the meaning of the pattern number and of the skip index in different examples, where the initial frame of the winding mandrel and the numbers that represent the sequence of the cycle starting points can also be seen. The term W in Fig. 4 indicates the fiber crossover.


If the desired pattern cannot be produced under certain conditions, CadWind® calculates a table with recommendations for the pattern number. In Figure 6, it can be seen that, in addition to the pattern numbers, the corresponding number of cycles and degree of coverage are displayed. It is not guaranteed that a pattern indicated by CadWind® can actually be calculated; the patterns at the top of the recommendation table calculated by CadWind® are the most likely to be achievable.

Obviously, the master path may not be achieved with the first combination of parameters, so the winding parameters need to be adjusted in order to reach the pattern point, begin the calculation of the entire winding cycle for a whole pattern, and display it.

The other two parameters to be adjusted are the degree of coverage and the number of cycles. The degree of coverage determines the fiber distribution around the mandrel. With 100% coverage, for the frame with the largest circumference, the fibers are placed without overlap and without gaps, while entering a higher value produces an overlap.

Figure 6: Example of a table of recommended winding patterns

Figure 7: Degree of coverage greater than 100% on a cylindrical mandrel, where b is the band width and ΔZ the winding pitch.

For example, 200% coverage corresponds to an overlap of half a band width. Entering a lower value may produce gaps; 50% coverage, for instance, corresponds to a gap of one band width.

By entering the number of cycles, the first and last deposition numbers can be fixed. The degree of coverage results from the calculated winding pattern. Figure 7 shows an example of a degree of coverage greater than 100% on a cylindrical winding mandrel.
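The overlap (or gap) implied by a given degree of coverage follows directly from the band width. The sketch below is only an illustration of this arithmetic under an assumed definition (coverage = band width divided by the spacing between band centers), using the band width implied by Table 2 (six rovings of 2.64 mm); it is not a CadWind® calculation.

```python
# Illustrative arithmetic only (assumed definition): degree of coverage =
# band width / spacing between band centers. Positive result = overlap,
# negative result = gap, both in mm.
BAND_WIDTH = 6 * 2.64  # mm, six rovings of 2.64 mm (Table 2)

def overlap_mm(coverage_percent):
    spacing = BAND_WIDTH * 100.0 / coverage_percent
    return BAND_WIDTH - spacing

for coverage in (50, 100, 103, 108, 200):
    print(f"{coverage:3d}% coverage -> overlap of {overlap_mm(coverage):+.2f} mm")
# 200% gives +7.92 mm (half a band width); 50% gives -15.84 mm (a gap of one band).
```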

Since the winding angles are different at the two domes, the adopted solution was to vary the angle along the cylindrical part of the mandrel, that is, to change the winding angle from the forward dome value until it reaches the aft dome value during the winding of the cylindrical section. However, for the variation between the calculated winding angles of 31.10° and 47.56°, the master path could not become tangent to the metallic flange that anchors the fibers, and several iterations were needed to reach a master path tangent to the flange. This was only possible for an angle of 23° at the forward dome and 41° at the aft dome, varying the winding angle by 1.8° at every 1/10 of the length of the cylindrical section, as listed in the sketch below.
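The resulting angle schedule along the cylinder is a simple linear ramp; the listing below is an illustration built from the values quoted above, not CadWind® output.

```python
# Illustrative winding-angle schedule along the cylindrical section:
# a ramp from 23 deg (forward end) to 41 deg (aft end) in steps of
# 1.8 deg per 1/10 of the cylinder length (2426 mm, Table 1).
CYLINDER_LENGTH = 2426.0           # mm
ALPHA_FWD, ALPHA_AFT = 23.0, 41.0  # deg
STEP = 1.8                         # deg per tenth of the length

for i in range(11):
    z = i * CYLINDER_LENGTH / 10.0
    alpha = ALPHA_FWD + i * STEP
    print(f"z = {z:7.1f} mm  ->  winding angle = {alpha:4.1f} deg")
# The last value is 23 + 10 * 1.8 = 41 deg, matching the aft dome angle.
```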

Figures 8 and 9 show, for the master path, the isometric view and the front views of the domes for the non-geodesic winding solution with the winding angles mentioned above.

After winding the master path and generating the table with the patterns recommended by CadWind, two non-geodesic winding simulations were carried out using the interweaving (cross-winding) method.

Depending on the chosen pattern, the winding method can be by layer superposition or by interweaving.

Interweaving produces an interlaced structure.


Figure 8: Isometric view of the master path

Figure 9: Front views of the master path: a) forward dome, b) aft dome

In this structure, each complete coverage of the mandrel represents two layers of material, positioned at +α and -α with respect to the longitudinal axis of the mandrel.

Layer-superposition winding, in turn, allows the layers positioned at +α and -α to be wound independently: in the first stage of the process the whole mandrel is covered with one layer, for example the -α layer, and only after it is fully complete does the next layer begin.

Winding parameter                High pattern   Low pattern
Forward dome winding angle       23°            23°
Aft dome winding angle           41°            41°
Winding angle variation          1.8°           1.8°
Pattern number / skip index      -25/18         7/5
Number of cycles                 107            102
Degree of coverage (%)           108            103

Table 4: Parameters for the non-geodesic winding

Figure 10: Front isometric view of the winding simulation with a high winding pattern

The following layer is oriented at +α, allowing the reinforcement to be placed in layers. Although classified as winding, this technique has characteristics of the lamination process.

In this work, all winding simulations were performed using the interweaving method.

RESULTS

Two simulations were carried out, one with a high winding pattern, that is, with a higher degree of coverage, and the other with a low winding pattern. Table 4 presents the selected winding parameters.

Figures 10 and 11 show the isometric views of the motor case, and Fig. 12 the front views of the domes, for the non-geodesic winding simulation with the high pattern, according to the parameters of Tab. 4.


Figure 12: Front views of the domes for the winding simulation with a high winding pattern: a) forward dome, b) aft dome

Figure 13: Front isometric view of the winding simulation with a low winding pattern

Figure 14: Rear isometric view of the winding simulation with a low winding pattern

Figure 11: Rear isometric view of the winding simulation with a high winding pattern

Figures 13 and 14 show the isometric views of the motor case, and Fig. 15 the front views of the domes, for the non-geodesic winding simulation with the low winding pattern, according to the parameters of Tab. 4.

CONCLUSIONS

For the case analyzed here, a pressure vessel to be used as the S-30 motor case, which has different polar openings due to the different interfaces (nozzle and igniter) and is relatively long, neither the isotensoid geodesic winding nor the planar winding can be applied; the non-geodesic helical winding, in which the winding angle is varied along the cylindrical section, is the solution.

Both non-geodesic winding solutions analyzed are satisfactory, and the winding simulation with the higher pattern provides better fiber anchoring and a better fiber distribution on the domes. Because the degree of coverage is slightly higher, it also leads to an optimization of the final thickness of the motor case:


Figure 15: Front views of the domes for the winding simulation with a low winding pattern: a) forward dome, b) aft dome

each wound layer is thicker, so that fewer layers are needed to build up the final thickness.

REFERENCES

ASTM D 2243, 1995, “Standard Test Method for Freeze-Thaw Resistance of Water-Borne Coatings”. Philadelphia.

Heitkoetter, R.F., 2009, "Análise de Fabricação e das Proteções Térmicas de um Envelope-Motor S-30 em Compósito", Master's Thesis, Instituto Tecnológico de Aeronáutica, São José dos Campos, S.P., Brazil, 164 f.

Marinucci, G., “Desenvolvimento, Fabricação e Análise de Falha e Fratura de Cilindros de Fibra de Carbono Colapsados por Pressão Interna”, 2001, Ph. D. Thesis, Instituto de Pesquisas Energéticas e Nucleares, Autarquia associada à Universidade de São Paulo, São Paulo, Brazil, 181 p.

Material S/A. 2002, Cad Wind NG for Windows, Process Simulation System for Filament Winding, User Manual. Belgium, 56 p.

Zickel, J., “Isotensoid Pressure Vessels”, 1962, ARS Journal, No 32, pp. 950-951.


Alison de Oliveira Moraes*, Institute of Aeronautics and Space, São José dos Campos, Brazil, [email protected]
Waldecir João Perrella, Technological Institute of Aeronautics, São José dos Campos, Brazil, [email protected]

* author for correspondence

Performance evaluation of GPS receiver under equatorial scintillation
Abstract: Equatorial scintillation is a phenomenon that occurs daily in the equatorial region after sunset and affects radio signals that propagate through the ionosphere. Depending on the temporal and spatial situation, equatorial scintillation can degrade the availability and precision of the Global Positioning System (GPS). This work evaluates the impact of equatorial scintillation on the performance of GPS receivers. First, the morphology and a statistical model of equatorial scintillation are briefly presented, together with a numerical model that generates synthetic scintillation data to simulate its effects. An overview of the main theoretical principles of GPS receivers is then given. The analytical models that describe the effects of scintillation at the receiver level are presented and compared with numerical simulations using a software radio receiver and synthetic data. The results achieved by simulation agree quite well with those predicted by the analytical models; the only exception is for links with extreme levels of scintillation and when weak signals are received.
Keywords: Component tracking performance, GPS receiver, Ionospheric scintillation, Communication system simulation.

Received: 06/08/09    Accepted: 29/09/09

INTRODUCTION

Several environmental factors may affect GPS (Global Positioning System) performance, such as electromagnetic interference, multipath, atmospheric delay and ionospheric scintillation. Ionospheric scintillation is responsible for a significant decrease in GPS accuracy and may even lead to complete system failure (Beach, 1998). Ionospheric scintillation results in rapid variations in the phase and amplitude of radio signals that cross the ionosphere. The phenomenon is more common in equatorial regions, approximately between -20° and +20° of latitude, and in the auroral zones, from 55° to 90° of latitude. In addition, scintillation activity has a temporal dependence on the local season and on the 11-year solar cycle (Beach, 1998; Kintner et al., 2004). The ionospheric scintillation phenomenon in equatorial regions is known as equatorial scintillation; it acts predominantly by causing fluctuations in the intensity of the signal (Beniguel et al., 2004). Equatorial scintillation usually takes place after sunset and, depending on its severity, affects GPS receiver performance (Basu, 1981). The objective of this work is to evaluate the effects of equatorial scintillation on the code and carrier tracking loops of GPS receivers. Initially, an introduction to ionospheric behavior in equatorial zones is given, followed by the presentation of a statistical model that characterizes amplitude scintillation. A receiver performance analysis as a function of this effect is then presented. Based on this analysis, analytical models are used to represent the behavior and performance of GPS receivers. Finally, results of numerical simulations are presented and compared with the analytical results.

FUNDAMENTALS OF IONOSPHERIC SCINTILLATION

Ionosphere

The ionosphere is the layer of the atmosphere where free electrons and ions are present in sufficient quantities to affect radio waves traveling through it. The structure of the ionosphere varies with season, time of day, solar production and the recombination process of the electrons. The ionosphere is sub-classified into layers D, E, F1 and F2, according to the electron density present. The D layer extends from 50 km to 90 km of altitude; it has a low electron density, vanishing during the night. The E layer extends from 90 km to 140 km in height and is basically produced by solar X-rays, with a peak of electron density at 120 km. The F1 layer extends from 140 km to 200 km, and its main source of ionization is the solar extreme ultraviolet (EUV) light.


Figure 1. Electrical density profile of ionosphere during day and night.

Figure 2. Equatorial fountain that gives rise to the equatorial anomaly.

The F2 layer extends from 200 km to 1000 km and presents a region of maximum electron density at 350 km (Kelley, 1989). These layers are a result of photochemical processes and plasma transport. A typical electron density profile of the ionospheric layers during the daytime and the ionization during the evening hours is shown in Fig. 1. It is important to observe how the layers decay at night in the absence of photo-ionization: the F1 layer almost disappears, while the F2 and E layers remain due to recombination and transport.

At dusk, the electric field, E, increases as the neutral winds, V, increase, and the ExB drift raises the F layer. This process is known as the Pre-Reversal Enhancement, in which the base of the F layer is forced against gravity, creating the Rayleigh-Taylor instability, where a heavier fluid is supported by a lighter one. When a perturbation occurs, this equilibrium is broken and the lighter fluid rises through the denser fluid, creating a bubble. In the equatorial ionosphere these bubbles are called Equatorial Plasma Bubbles. The Equatorial Anomaly is responsible for the formation of the plasma density irregularities that result in the Equatorial Plasma Bubbles that lead to scintillation (Kintner et al., 2004).

STATISTICAL CHARACTERISTIC OF AMPLITUDE SCINTILLATION

The index that indicates the severity of amplitude scintillation is S4, defined as the normalized variance of the intensity of the received signal:

S_4 = \sqrt{\dfrac{\langle I^2 \rangle - \langle I \rangle^2}{\langle I \rangle^2}}     (1)

where I = A_S^2 is the intensity of the received signal. Based on studies of ionospheric scintillation data, it has been shown that amplitude scintillation can be modeled as a stochastic process that follows a Nakagami-m distribution, given by (Fremouw, Livingston and Miller, 1980)

p(A_S) = \dfrac{2 m^m A_S^{2m-1}}{\Gamma(m)\,\Omega^m} \exp\!\left(-\dfrac{m A_S^2}{\Omega}\right)     (2)

where A_S is the amplitude of the received signal, Γ( ) is the gamma function and Ω = E[A_S^2]. The m parameter of the Nakagami-m distribution and the S4 index are related by m = 1/S4^2.
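As a small illustration of these definitions (not taken from the paper), the sketch below estimates S4 from a series of intensity samples and converts it to the Nakagami m parameter; the input record is a hypothetical, synthetically generated signal intensity.

```python
# Minimal sketch: estimate the S4 index from intensity samples and the
# corresponding Nakagami-m parameter (m = 1 / S4**2), per Eqs. (1)-(2).
import numpy as np

def s4_index(intensity):
    """S4 = sqrt((<I^2> - <I>^2) / <I>^2) for an array of intensity samples."""
    intensity = np.asarray(intensity, dtype=float)
    mean_i = intensity.mean()
    return np.sqrt((np.mean(intensity**2) - mean_i**2) / mean_i**2)

# Hypothetical example: intensity of a strongly fading signal (Rayleigh
# amplitude), for which S4 should be close to 1 and m close to 1.
rng = np.random.default_rng(0)
amplitude = rng.rayleigh(scale=np.sqrt(0.5), size=100_000)
intensity = amplitude**2
s4 = s4_index(intensity)
print(f"S4 = {s4:.2f}, Nakagami m = {1.0 / s4**2:.2f}")
```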


Equatorial Scintillation

Equatorial scintillation happens when small-scale irregularities in the F region of the ionosphere affect RF signals. Influenced by pressure gradients and gravity, the equatorial plasma present in the F2 layer is forced downward along the magnetic field lines. This process creates a belt of enhanced electron density from 15° to 20° on both sides of the geomagnetic equator. This particular region where the electron density is enhanced is referred to as the Equatorial Anomaly, and the process by which it is created is known as the Fountain Effect. This process is illustrated in Fig. 2, where the Equatorial Fountain is indicated by the ExB drift, which drives the plasma upward. This plasma is then diffused along the B field under the influence of gravity, g, and pressure gradients, ∇P. This process happens in the daylight hemisphere (Kelley, 1989).


SYNTHETIC AMPLITUDE SCINTILLATION DATA

Based on Humphreys et al. (2008), a scintillation model has been implemented with the objective of simulating the scintillation effects on equatorial transionospheric radio signals such as the GPS ones. This model assumes that amplitude scintillation follows a Rice distribution. This assumption has been made because of the implementation simplicity of the Rice model. The Rice distribution is given by

p AS( ) = 2AS 1+ K( )Ω

I0 2ASK + K 2

Ω

⎝⎜

⎠⎟ e

− K − AS2 1+ K( ) Ω (3)

where, AS ≥ 0 , I0 ( ) is the modified Bessel function of zeroth order and K is Rician parameter. Although the Nakamami-m distribution, (2), is the one that best fits the real scintillation data (Fremouw, Livingston and Miller, 1980). But in Humphreys et al. (2008), it is shown that Nakamami-m and Rice distribution are similar and they agree quite well with the empirical data for S4 < 1 , according to the chi-square tests. Thus, the Nakamagi-m distribution might be closely approximated by Rice distribution. This mapping is given by (Simon and Alouini, 2006),

$$ K = \frac{\sqrt{m^2 - m}}{m - \sqrt{m^2 - m}} \qquad (4) $$

Consider the expression

$$ z(t) = z_K + \xi(t) \qquad (5) $$

where z_K is a complex constant and ξ(t) is a zero-mean Gaussian process. From (5), amplitude scintillation with a Rice distribution can be generated by

$$ A_S(t) = \left| z(t) \right| \qquad (6) $$

The block diagram showing the mechanization process of scintillation simulator is illustrated in Fig. 3.

According to this process, a zero-mean white Gaussian noise n(t) is applied to a 2nd order Butterworth low pass filter, with the response:

$$ H(f) = \frac{1}{1 + \left( f/B_d \right)^{4}} \qquad (7) $$

where B_d = β/(2πτ0) is the filter bandwidth, β = 1.2396464 is a constant and τ0 is the decorrelation time of the generated scintillation data. The filtered version of n(t) is denoted ξ(t), with variance σξ². The value of z_K in (5) is responsible for controlling the level of scintillation in the simulator:

$$ z_K = \sqrt{2\,\sigma_\xi^2\, K} \qquad (8) $$

Based on (4) and the relation m = 1/S4², it is possible to establish a relation between the scintillation index S4 and the Rician parameter K, which is used to compute z_K. The constant z_K is added to ξ(t), resulting in z(t) = ξ(t) + z_K. The signal z(t) is then normalized by α = E[|z(t)|], resulting in the synthetic scintillation data. An example of amplitude scintillation data generated by this model is shown in Fig. 4.
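A minimal numerical sketch of the mechanization of Fig. 3 under the assumptions above (complex white Gaussian noise shaped by a second-order Butterworth filter, plus a constant component set through Eqs. (4) and (8)); the function and its default parameter values are illustrative and are not taken from the original software:

import numpy as np
from scipy.signal import butter, lfilter

def rice_scintillation(s4, tau0=0.8, fs=100.0, duration=60.0, seed=0):
    """Synthetic amplitude scintillation A_S(t) with a Rice distribution.

    s4       : target scintillation index
    tau0     : decorrelation time of the scintillation [s]
    fs       : sampling rate of the generated series [Hz] (illustrative value)
    duration : length of the series [s]
    """
    rng = np.random.default_rng(seed)
    n = int(duration * fs)

    # Complex white Gaussian noise shaped by a 2nd-order Butterworth low-pass filter, Eq. (7)
    beta = 1.2396464
    bd = beta / (2.0 * np.pi * tau0)                       # filter bandwidth Bd
    b, a = butter(2, bd / (fs / 2.0))
    xi = lfilter(b, a, rng.normal(size=n) + 1j * rng.normal(size=n))
    var_xi = xi.real.var()                                 # per-component variance of the diffuse term

    # Rician K from the Nakagami parameter m = 1/S4^2, Eq. (4), and the direct component, Eq. (8)
    m = 1.0 / s4**2
    k = np.sqrt(m**2 - m) / (m - np.sqrt(m**2 - m))
    zk = np.sqrt(2.0 * var_xi * k)

    z = zk + xi                                            # Eq. (5)
    amp = np.abs(z)                                        # Eq. (6)
    return amp / amp.mean()                                # normalization of the amplitude

amp = rice_scintillation(s4=0.68)                          # as in the example of Fig. 4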

Figure 3. Mechanization process of scintillation simulator.

Figure 4. Example of scintillation for S4=0.68.

GPS RECEIVER

The GPS receiver is divided into three main parts, according to Fig. 5. The Front End (FE) is the set of devices where the electromagnetic waves from the GPS satellites are converted into an electrical signal by the antenna. Through the front end, the input signal is filtered, amplified to a proper amplitude and converted to an Intermediate Frequency (IF) to be processed. The second part is the Digital Signal Processing (DSP) section. This section performs the acquisition


and tracking of the received signal, providing the pseudorange and carrier phase information. The last section of the receiver is the Navigation Data Processing (NDP); this section has the task of calculating ephemeris data, GPS time, position and velocity (Ward, 1996).

Figure 5. Architecture of GPS receiver (Ward, 1996).

Scintillation affects the performance of GPS receivers, most notably at the tracking loop level. Depending on the scintillation level, the receiver might present increased range measurement errors or even lose lock in the carrier and code loops. In extreme cases, scintillation can result in full disruption of the receiver.

The signal received from a generic GPS satellite, at the output of the FE, that will be processed by the DSP section of the receiver is given by (Borre et al., 2007):

$$ s(n) = A(n)\, C(n-\tau)\, D(n-\tau) \cos\!\left( 2\pi f_{FI}\, t + \varphi \right) + N(n) \qquad (9) $$

where A(n) is the amplitude of the received signal, C(n) is the satellite pseudorandom noise (PRN) code, D(n) is the navigation data sequence, τ is the code delay, fFI is the intermediate frequency and φ is the phase of the GPS carrier. In addition to the received signal, the term N(n) represents band-limited, stationary, zero-mean Gaussian noise with power spectral density N0/2.

Carrier Tracking Loop

In order to demodulate the GPS navigation data successfully, it is necessary to generate an exact carrier wave replica. This task is usually executed in a GPS receiver by a phase locked loop (PLL). The model of PLL used on GPS receivers is based on a Costas suppressed carrier tracking loop, as illustrated in Fig. 6 (Ward, 1996).

In each arm of the Costas loop, the IF signal undergoes two multiplications. The first one has the objective of wiping off

Figure 6. Block diagram of Costas carrier tracking loop.

the carrier of the received signal. The Costas tracking loop is insensitive to 180° phase shifts, such as those caused by the navigation data bits. The second multiplication wipes off the PRN code, which is generated by the code tracking loop. After the multiplications, the signal in both arms is filtered by a pre-detection integrate-and-dump filter. These signals are then used by the phase discriminator to determine the carrier phase error between the locally generated replica and the received signal. The phase error computed by the discriminator is filtered and fed back to the carrier NCO.
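As an illustration of one pre-detection interval of the loop in Fig. 6, the sketch below uses the conventional arctangent Costas discriminator found in software receivers such as the one of Borre et al. (2007); the function name and interface are illustrative, not the authors' code:

import numpy as np

def costas_update(if_block, prompt_code, f_carrier, fs, phase):
    """One pre-detection interval of the Costas loop of Fig. 6: carrier wipe-off,
    code wipe-off, integrate-and-dump, and arctangent phase discriminator."""
    n = np.arange(if_block.size)
    replica = np.exp(-1j * (2.0 * np.pi * f_carrier * n / fs + phase))  # local I/Q carrier replica
    corr = np.sum(if_block * replica * prompt_code)                     # integrate and dump
    i_p, q_p = corr.real, corr.imag
    return np.arctan(q_p / i_p)   # phase error; unchanged when both arms flip sign with a data bit

The returned phase error would then pass through the loop filter and steer the carrier NCO, as described above.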

An important parameter in the evaluation of receiver performance is the tracking threshold, the point beyond which the loop no longer works stably and loses lock. When that happens, the phase error measurements become meaningless and the number of cycle slips increases. The exact point of this transition is hard to determine. Reasonable values to be assumed for the threshold and for the mean time between cycle slips are given, respectively, by Holmes (1982):

$$ \sigma_{\varphi\varepsilon,\lim}^2 = \left( \pi/12 \right)^2 \ \mathrm{rad}^2 \qquad (10) $$

$$ T = \frac{ \pi^2 \rho_\varepsilon\, I_0^2(\rho_\varepsilon) }{ 2 B_n } \qquad (11) $$

where ρε = 1/(4σ²ϕε) is the signal-to-noise ratio of the loop.
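Using Eqs. (10) and (11) as reconstructed above, a quick numerical check of the mean time between cycle slips at the threshold is straightforward; this is a sketch with illustrative parameter values only:

import numpy as np
from scipy.special import i0

def mean_time_between_slips(sigma_phi, bn):
    """Mean time between cycle slips, Eq. (11), for a loop with phase-error
    standard deviation sigma_phi [rad] and noise bandwidth bn [Hz]."""
    rho = 1.0 / (4.0 * sigma_phi**2)          # loop signal-to-noise ratio
    return np.pi**2 * rho * i0(rho)**2 / (2.0 * bn)

sigma_lim = np.pi / 12.0                      # threshold of Eq. (10): 15 degrees of phase jitter
print(mean_time_between_slips(sigma_lim, bn=20.0), "s")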

Code Tracking Loop

The objective of the code tracking loop is analogous to that of the carrier tracking loop. In this case, the code loop has the function of providing a PRN code sequence replica with the same code delay as the received signal. With this estimate of the code delay it is possible to obtain a pseudorange measurement. The code tracking loop in a GPS receiver is a delay locked loop (DLL); Fig. 7 illustrates a block diagram of the DLL.


When there is no scintillation, the component of tracking error due to thermal noise in a PLL is given by

$$ \sigma_{\varphi Th}^2 = \frac{B_n}{C/N_0}\left[ 1 + \frac{1}{2\,\Delta t\,(C/N_0)} \right] \qquad (14) $$

where Bn is the PLL single-sided noise equivalent bandwidth, C/N0 is the nominal carrier to noise density ratio and Δt is the pre-detection integration period.

Equatorial scintillation produces significant fluctuations in the intensity of the received signal. This amplitude scintillation degrades the C/N0 and, as a consequence, increases the tracking error due to thermal noise. In Conker et al. (2003), the effects of amplitude scintillation were modeled considering that this kind of scintillation follows a Nakagami-m distribution. The thermal noise tracking error can then be characterized by S4, according to the expression

$$ \sigma_{\varphi Th}^2 = \frac{ B_n \left[ 1 + \dfrac{1}{2\,\Delta t\,(C/N_0)\left( 1 - 2 S_4^2 \right)} \right] }{ (C/N_0)\left( 1 - S_4^2 \right) } \qquad (15) $$

The term σ²ϕTh in (13) is the one that contributes most to the PLL tracking error variance. Indeed, compared with σ²ϕTh, the other terms in (13) become negligible.

For the tracking error of the code delay at the output of the DLL it is correct to affirm that σ²τε = σ²τTh, where σ²τTh is the thermal noise component. In this case, there is no phase scintillation component because, according to Davies (1990), a non-coherent DLL is not affected by phase scintillation. In the absence of scintillation, the DLL code tracking error due to thermal noise is given by

$$ \sigma_{\tau Th}^2 = \frac{B_n\, d}{2\,(C/N_0)}\left[ 1 + \frac{1}{\Delta t\,(C/N_0)} \right] \qquad (16) $$

In a way analogous to the PLL case, in Conker et al. (2003) the effects of scintillation on the DLL are modeled using a Nakagami-m distribution, which characterizes the code tracking error as a function of the amplitude scintillation index S4:

$$ \sigma_{\tau Th}^2 = \frac{ B_n\, d \left[ 1 + \dfrac{1}{\Delta t\,(C/N_0)\left( 1 - 2 S_4^2 \right)} \right] }{ 2\,(C/N_0)\left( 1 - S_4^2 \right) } \qquad (17) $$
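The sketch below evaluates Eqs. (14)-(17) numerically for a few S4 values; parameter values are illustrative and the scintillation-dependent expressions are only meaningful while S4² < 0.5 (the denominators change sign beyond that):

import numpy as np

def pll_thermal_var(cn0_dbhz, bn, dt=1e-3, s4=0.0):
    """PLL thermal-noise tracking error variance [rad^2], Eqs. (14)/(15)."""
    cn0 = 10.0 ** (cn0_dbhz / 10.0)
    return bn * (1.0 + 1.0 / (2.0 * dt * cn0 * (1.0 - 2.0 * s4**2))) / (cn0 * (1.0 - s4**2))

def dll_thermal_var(cn0_dbhz, bn, dt=1e-3, d=0.5, s4=0.0):
    """DLL thermal-noise tracking error variance [chips^2], Eqs. (16)/(17)."""
    cn0 = 10.0 ** (cn0_dbhz / 10.0)
    return bn * d * (1.0 + 1.0 / (dt * cn0 * (1.0 - 2.0 * s4**2))) / (2.0 * cn0 * (1.0 - s4**2))

chip_m = 293.0523                              # C/A-code chip length in metres
for s4 in (0.0, 0.3, 0.5, 0.7):
    pll_deg = np.degrees(np.sqrt(pll_thermal_var(40.0, bn=20.0, s4=s4)))
    dll_m = np.sqrt(dll_thermal_var(36.0, bn=1.0, s4=s4)) * chip_m
    print(f"S4 = {s4:.1f}: PLL sigma = {pll_deg:5.2f} deg, DLL sigma = {dll_m:5.2f} m")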

Figure 7. Block diagram of code tracking loop - DLL

First, the IF received signal is multiplied by a carrier wave replica, resulting in a baseband signal. The code generator provides the code replica and two shifted versions of it. These three code replicas are called early, prompt and late. The early and late replicas are shifted from the prompt replica by a factor d. The baseband signals in the I and Q arms are multiplied by the code replicas and filtered by a pre-detection integrate-and-dump filter. The filtered I and Q signals are then processed by the code loop discriminator to produce a code delay error that is filtered and fed back to the code generator.
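For completeness, a sketch of one commonly used code loop discriminator, the normalized early-minus-late envelope; this particular choice is illustrative and is not stated in the paper:

import numpy as np

def dll_discriminator(baseband, code_early, code_late):
    """Normalized early-minus-late envelope discriminator (one common choice)."""
    def envelope(code):
        corr = np.sum(baseband * code)          # correlate the baseband block with a code replica
        return np.abs(corr)
    e, l = envelope(code_early), envelope(code_late)
    return (e - l) / (e + l)                    # code delay error estimate fed to the loop filter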

As in the carrier tracking loop case, there is a tracking threshold that is used to evaluate the performance of the code loop. In this case, the DLL is considered to be in lock if the threshold is respected (Ward, 1996):

$$ \sigma_{\tau\varepsilon,\lim}^2 \le \left( d/3 \right)^2 \qquad (12) $$

EFFECTS OF AMPLITUDE SCINTILLATION ON GPS RECEIVERS

According to Conker et al. (2003), the tracking error variance at the output of the PLL is expressed as

$$ \sigma_{\varphi e}^2 = \sigma_{\varphi S}^2 + \sigma_{\varphi Th}^2 + \sigma_{\varphi osc}^2 \qquad (13) $$

where σ²ϕS corresponds to the phase scintillation error component, σ²ϕTh is the thermal noise component and σ²ϕosc is the receiver oscillator noise. The receiver oscillator noise is assumed to have a standard deviation of 0.1 rad and is ignored in this work. For the equatorial region, the values of σ²ϕS are considerably low and well behaved.


Usually the results of the DLL tracking error are presented in meters. In this case, στε(meters) = W_C/A · στε, where W_C/A is the C/A-code chip length (293.0523 m).

Numerical Results

This section describes the results obtained from the simulations conducted in this investigation, as depicted in the block diagram shown in Fig. 8 (Moraes, 2009).

It is observed that the results of the simulation and of the analytical models do not diverge significantly unless the received signal is weak and the tracking error is above the tracking thresholds (10) and (12) for the carrier and code loops. The next step in the simulation consists of affecting the GPS signal with amplitude scintillation. In this case, the amplitude scintillation was simulated from a low level

Figure 8. General block diagram of the simulation.

The scintillation model described above was implemented to generate synthetic amplitude scintillation data. With this model, it is possible to specify the scintillation severity through the S4 index, generating an ionospheric channel response.

A GPS signal as expressed in (9) was generated. The navigation data bits D(n) were generated as a known sequence. This sequence was modulated by the PRN code and by the carrier wave. The carrier wave was set to an intermediate frequency fFI of 9.548 MHz, with a sampling frequency of 38.192 MHz.

The amplitude of the generated GPS signal varies with the synthetic amplitude scintillation data. Gaussian noise was then added to the GPS signal affected by amplitude scintillation. The noise level is adjusted according to the desired C/N0 of the simulated link.

This signal, corrupted by noise and affected by amplitude scintillation, is processed by the DSP section of the receiver. A software receiver based on Borre et al. (2007) was modified in order to evaluate the effects of amplitude scintillation on the carrier and code tracking loops.

The tracking error at the output of this software receiver was compared with the analytical models. In all simulations the GPS data lasted 30 seconds, the pre-detection integration period was 1 ms and the correlator spacing was 0.5 chip.
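A compressed sketch of the signal-generation step just described (Eq. (9) with the stated intermediate and sampling frequencies). PRN code generation, data encoding and the software receiver itself are omitted, and the rates assumed for the data bits (50 bit/s) and the scintillation series (100 Hz) are illustrative assumptions:

import numpy as np

def make_if_signal(code, data_bits, amp_scint, cn0_dbhz,
                   f_if=9.548e6, fs=38.192e6, duration=0.02, seed=1):
    """GPS IF signal affected by amplitude scintillation, following Eq. (9).
    code and data_bits are +/-1 sequences; amp_scint can come from rice_scintillation()."""
    rng = np.random.default_rng(seed)
    n = int(duration * fs)
    t = np.arange(n) / fs
    c = code[(np.floor(t * 1.023e6) % code.size).astype(int)]           # C/A code chips, C(n)
    d = data_bits[(np.floor(t * 50.0) % data_bits.size).astype(int)]    # navigation data, D(n)
    a = amp_scint[(np.floor(t * 100.0) % amp_scint.size).astype(int)]   # scintillation samples, A(n)
    carrier = np.cos(2.0 * np.pi * f_if * t)
    cn0 = 10.0 ** (cn0_dbhz / 10.0)
    carrier_power = np.mean(a**2) / 2.0
    noise_var = carrier_power / cn0 * fs / 2.0        # sampled noise variance = N0 * fs / 2
    return a * c * d * carrier + rng.normal(scale=np.sqrt(noise_var), size=n)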

Figure 9. Calibration results for the Carrier tracking loop.

Figure 10. Calibration results for the code tracking loop - DLL.

Figures 9 and 10 show calibration test results for three different bandwidth values for the carrier and code loops, respectively. This test consists of simulating a situation where there is no scintillation; in this case, the received GPS signal is corrupted only by noise. The results of this simulation are compared with the analytical values based on (15) and (17).


up to 0.7, which is considered a severe level of scintillation. For this simulation, a GPS link was considered with C/N0 = 40 dB-Hz and C/N0 = 32 dB-Hz, and Bn = 20 Hz. The results, shown in Fig. 11 for the carrier loop, suggest that the analytical model fails to capture the scintillation effects on the carrier loop for cases with weak signals. GPS links with high C/N0, in turn, are little affected by scintillation.

CONCLUSION

This work has presented a performance evaluation of a GPS receiver under equatorial scintillation. Analyzing the analytical models, it is possible to conclude that amplitude scintillation is a significant source of error, especially in the carrier tracking loop. Using a synthetic amplitude scintillation model, it has been possible to simulate amplitude scintillation in a software receiver. The results from the numerical simulations showed that, except for cases involving extreme scintillation or radio links with a low carrier-to-noise density ratio, C/N0, the numerical results agreed quite well with those predicted by the analytical models. In situations involving weak signals and strong scintillation, the analytical models fail to predict the real performance of the receiver (Moraes, 2009).

REFERENCES

Basu, S., 1981, “Equatorial Scintillations – A Review” Journal of Atmospheric and Terrestrial Physics, Vol. 43, Nº. 5/6, pp. 473-489.

Beach, T. L., 1998, “Global Positioning System Studies of Equatorial Scintillations”, Ph.D. Thesis, Cornell University, 335p.

Beniguel, Y., Forte, B., Radicella, S. M., Strangeways, H. J., Gherm, V. E., Zernov, N. N., 2004, “Scintillations Effects on Satellite to Earth Links for Telecommunication and Navigation Purposes”, Annals of Geophysics, Vol. 47, pp. 1179-99.

Borre, K., Akos, D. M., Bertelsen, N., Rinder, P., Jensen S. H., 2007, “A Software-Defined GPS and Galileo Receiver”, Birkhäuser, Boston, 176p.

Conker, R. S., El-Arini, M. B., Hegarty, C. J., Hsiao, T., 2003, "Modeling the Effects of Ionospheric Scintillation on GPS/Satellite-Based Augmentation System Availability", Radio Science, Vol. 38.

Davies, K., 1990, “Ionospheric Radio,” IEE Electromagnetic Waves Series, Vol. 31.

Figure 13. Code tracking loop under amplitude scintillation.

Figure 12. Costas carrier tracking loop performance for S4=0.63.

Figure 11. Carrier tracking loop performance under amplitude scintillation.

On the other hand, links with a limited C/N0 present a carrier tracking error higher than expected.

To confirm this limitation of the receiver, a situation was considered where a GPS signal is affected by an amplitude scintillation of S4=0.63. These data were processed by the receiver, changing only the power of the received signals. Figure 12 presents the result of this simulation for the carrier loop. These results show that with low C/N0 and severe scintillation, the analytical model does not describe the real tracking error.

Simulations show that the thermal noise error associated with amplitude scintillation is small for the code loop. Even for extreme scintillation levels and weak signals, this loop remains robust. Figure 13 shows the performance of the code loop under several levels of scintillation for a case where C/N0 = 36 dB-Hz and Bn = 1 Hz.


Fremouw, E. J., Livingston, R. C., Miller, D. A., 1980, "On the Statistics of Scintillating Signals", Journal of Atmospheric and Terrestrial Physics, Vol. 42, pp. 717-731.

Holmes, J. K., 1982, “Coherent Spread Spectrum Systems”, John Wiley & Sons, New Jersey, 624p.

Humphreys, T. E., Psiaki, M. L., Hinks, J. C. Kintner Jr., P. M., 2008, “Simulating Ionosphere-Induced Scintillation for Testing GPS Receiver Phase Tracking Loops”, IEEE Transactions on Aerospace and Electronic Systems.

Kelley, M. C., 1989, “The Earth’s Ionosphere: Plasma Physics and Electrodynamics”, San Diego, Academic Press, 484 p.

Kintner Jr., P. M., Ledvina, B. M., De Paula, E. R., Kantor, I. J., 2004, "Size, Shape, Orientation, Speed, and Duration of GPS Equatorial Anomaly Scintillations", Radio Science, Vol. 39, pp. 2012-2017.

Moraes, A. O., 2009, "Análise do desempenho de um receptor GPS em canais com cintilação ionosférica", Master's Thesis in Telecommunications, Technological Institute of Aeronautics, São José dos Campos, SP, Brazil, 106p.

Simon, M. K., Alouini, M., 2006, "Digital Communications Over Fading Channels", 2nd ed., Wiley, New York.

Ward, P., 1996, “Understanding GPS: Principles and Applications”, Artech House, Boston, pp.119-208.


Alberto W. S. Mello Jr.*
Institute of Aeronautics and Space
São José dos Campos - Brazil

Daniel Ferreira V. Mattos
Institute of Aeronautics and Space
São José dos Campos - Brazil

* author for correspondence

Reliability prediction for structures under cyclic loads and recurring inspections

Abstract: This work presents a methodology for determining the reliability of fracture control plans for structures subjected to cyclic loads. It considers the variability of the parameters involved in the problem, such as the initial flaw and the crack growth curve. The probability of detection (POD) curve of the field non-destructive inspection method and the condition/environment are used as important factors for structural confidence. According to classical damage tolerance analysis (DTA), inspection intervals are based on detectable crack size and crack growth rate. However, all variables have uncertainties, which makes the final result totally stochastic. The material properties, flight loads, engineering tools and even the reliability of inspection methods are subject to uncertainties which can significantly affect the final maintenance schedule. The present methodology incorporates all the uncertainties in a simulation process, such as Monte Carlo, and establishes a relationship between the reliability of the overall maintenance program and the proposed inspection interval, forming a "cascade" chart. Due to the scatter, it also defines the confidence level of the "acceptable" risk. As an example, the damage tolerance analysis (DTA) results are presented for the upper cockpit longeron splice bolt of the BAF upgraded F-5EM. In this case, two possibilities of inspection intervals were found: one that can be characterized as remote risk, with a probability of failure (integrity nonsuccess) of 1 in 10 million per flight hour; and the other as extremely improbable, with a probability of nonsuccess of 1 in 1 billion per flight hour, according to aviation standards. These two results are compared with the classical military airplane damage tolerance requirements.

Keywords: Reliability, Structure integrity, Fatigue, Damage tolerance.

LIST OF SYMBOLS AND ABBREVIATIONS

a        Crack size
α        Parameter of the POD curve
a0       Crack length for zero-detection probability
COV      Coefficient of variation
CDF      Cumulative distribution function
DTA      Damage tolerance analysis
FCL      Fatigue critical location
λ        Parameter of the POD curve
NDI      Non-destructive inspection
POD, Pd  Probability of detection
PDF      Probability distribution function
s, s(t)  Standard deviation; standard deviation as a function of time

INTRODUCTION

Structures such as airplanes, bridges, ships, etc, are subjected to cyclical loads that can lead any initial crack

to a catastrophic failure. Ideally, any fracture control plan should be based upon the acceptable probability of failure.

The crack propagation rate, the field inspection and the quality of the material are subject to uncertainties that make a deterministic reliability study for this case difficult (Provan, 2006). Many of the parameters and variables used in fracture control have a scatter that must be accounted for in life prediction. All material properties have variability. In most cases, the structural loads are statistical variables. Crack detection capability is also governed by statistics: there is a non-zero probability that a crack will be missed, regardless of how sophisticated the inspection method is. For this reason, a scatter factor always has to be considered in a crack growth curve to determine the inspection intervals. The scatter factor depends on the accuracy of the data used as well as on the specification that must be satisfied.

Primary components are inspected upon manufacture and undergo an extremely strict quality control system. For each component, it can be assured that if a flaw exists it is smaller than a guaranteed size - ag. This guaranteed

Received: 18/09/09 Accepted: 28/10/09


Every time the structure is inspected there is a probability of missing the crack, regardless of its size. Naturally, the inspection may be performed several times during the structure service life, which will increase its probability of detection.

On the other hand, the size of the crack at a certain service life time depends not only upon the initial flaw size but also the crack growth rate. There is uncertainty about how fast the crack grows, which can be visualized in Figure 3.

Figure 1: CDF for initial crack size.

Figure 2: Crack Probability of Detection Curves.

Figure 3: Scatter in Crack Growth Curve.

maximum flaw depends upon the type of inspection (Broek, 1989; Knorr, 1974; Lewis, 1978).

On the other hand, each component is assumed to have a flaw of at least a1, which represents the minimum intergranular “defect” and/or machining surface imperfection present in the material (Gallagher, 1984).

The initial flaw size can be considered as a uniform distribution between a1 and ag (Knorr, 1974), as depicted in Figure 1.

From the moment of manufacture, the structure has to be inspected by a specified NDI (non-destructive inspection) method. The probability of detection for each method depends upon the crack size and the accessibility of the inspected location, as shown in Figure 2.

The point where the curves cross the horizontal axis is the zero probability of detection and is dictated by the resolution of the NDI equipment under that specific condition (Lewis, 1978).

The crack size at each operation time, for a prescribed initial crack, follows a normal distribution whose mean is the predicted crack growth curve, with a given coefficient of variation (Broek, 1989). Figure 3 shows typical crack growth curves with a possible scatter in the crack growth rate.

In order to obtain the reliability of a structure submitted to cyclic loads, all these uncertainties must be quantified and accounted for. The following Section will describe each of the uncertainties involved in the analysis and how they can be anticipated.

UNCERTAINTIES

Initial Crack Size

Each structure is made of components that had to be machined and assembled to form the whole part. The machining process as well as the assembly can introduce small damage to the components that can lead to propagating cracks (Knorr, 1974). Also, even for very well controlled processes, there is an intrinsic “crack”, which can be defined as imperfections in the grain boundary of the metal (ASM Handbook, 1992). Figure 4 illustrates how this imperfection may occur in the grain boundary level.

As a consensus, it has been usual to consider a value of 0.127 mm (0.005”) as the minimum flaw in the structure.


However, any number can be specified, according to the requirements. This is the a1 parameter exemplified in Figure 1.

in Figure 6. This Figure depicts how this scatter can be understood, by showing a normal distributed crack growth rate with a central value, which is the average curve predicted by fracture mechanics, and its standard deviation. A coefficient of variation between 10 and 20 per cent normally covers all the uncertainties related to the crack growth rate (Broek, 1989).

Figure 4: Initial flaw formed in the grain boundary

On the other hand, before being assembled to form the structure, all components are inspected by the manufacturer by means of a suitable non-destructive inspection method. Thus, for each component, or for the complete structure, there is a guarantee that any existing imperfection is smaller than the in-laboratory detectable size. Normally, this guaranteed value is of the order of 1.27 mm (0.05"). This is the ag parameter illustrated in Figure 1 (Knorr, 1974).

Crack Growth Curve

All material properties, including toughness, show variability. According to MIL-A-8866 (USAF, 1974), in most cases the structural loads are statistical variables. The pressure in a vessel may be well controlled, but random fluctuations may occur. Loads on bridges vary widely depending upon traffic; they can be estimated but cannot be known until after the fact. Despite the state of the art in measuring loads, fatigue life prediction is based on the assumption that previously measured loads will be repeated in the future. In addition, there are errors due to shortcomings and limitations of the analysis, due to the limited accuracy of loads and stress history, and due to simplifying assumptions. Because of all these assumptions, it is preferable to use best estimates and average data and to apply the variability to the final crack growth curve.

Thus, with the best information from the load history and material properties, an average crack growth curve can be obtained using the fracture mechanics approach, as depicted in Figure 5.

To summarize all the uncertainties of loads and material properties in the crack growth curve, a scatter can be applied, with the mean value on the predicted curve and with a given distribution from that value, as shown

Figure 5: Crack growth curve from the best known operational and material data.

Figure 6: Scatter applied on average crack growth curve.

NDI Probability of Detection

As already discussed, crack detection is governed by statistics. There is a non-zero probability that a crack will be missed, despite the sophisticated inspection methodology.

This work focuses on the available data for the following NDI techniques: eddy current, ultrasound, dye penetrant, X-ray and visual. It is beyond the scope of this work to discuss how the inspections are performed; none of the inspection processes is described here. If an NDI technique is improved in the future, the parameters presented in this work can be updated and the overall reliability determination tool will still be valid.

As shown in Figure 2, there is a certain crack size below which detection is physically impossible. For example, for visual inspection this would be determined by the


a0 is a function of the inspection method and of the accessibility of the area to be inspected (Table 2).

Table 1: λ/a0 for different inspection methods (Knorr, 1974; Lewis, 1978).

Method           λ/a0
Ultrasonic       3.00
Dye penetrant    2.17
Eddy current     2.23
X-ray            2.50
Visual           2.00

Table 2: a0, in millimeters, for various inspection methods and accessibilities (Knorr, 1974; Lewis, 1978).

Accessibility    Ultrasonic    Penetrant    X-ray     Visual
Excellent        0.508         0.762        1.524     2.54
Good             1.016         1.524        3.048     5.08
Fair             2.032         3.048        6.096     10.16
Not easy         3.048         4.572        9.144     15.24
Difficult        4.064         6.096        12.19     20.32

METHODOLOGY

The proposed solution for the problem involves Monte Carlo simulation (Manuel, 2002). The process consists of generating random numbers for ai and for the crack growth rate, changing the inspection interval, and computing the probability of detection due to recurring inspections during the structure's service life.

As a refinement of the method, the Latin Hypercube procedure was also proposed (Manuel, 2002), where the simulation domain is divided into subdomains to better distribute the random numbers.

All variables in the problem are considered uncorrelated. The procedure for obtaining ai and the variability of the crack growth curve is summarized in Figure 7. In this Figure, the distribution of the initial flaw is considered to be uniform and the crack growth rate is Gaussian. In each iteration, the initial flaw size is randomly picked between a1 and ag, and the effective crack size is computed on the curve g(t). With f(t) the average crack size as a function of the variable t (cycles, time, flights etc.), g(t) is given by g(t) = f(t) + k·s(t), where k is the number of standard deviations obtained from the N(0,1) curve by random generation and s(t) is the standard deviation expected for that crack size.

resolution of the eye, for ultrasonic inspection by the wavelength, and so on. At the other extreme, even for very large cracks, the probability of detection is never equal to 100 per cent, because any crack may be missed. Field data from several studies on the reliability of non-destructive inspection have shown that the probability curves have the general form shown in Figure 2, which can be described by the equation (Broek, 1989):

$$ p = 1 - e^{ -\left[ (a - a_0)/(\lambda - a_0) \right]^{\alpha} } \qquad (1) $$

where a0 is the crack size for which detection is absolutely impossible (zero probability of detection), and α and λ are parameters determining the shape of the curve. It is important to distinguish between the detectable crack size and the constant a0 that appears in the equation: the detectable crack size ad is a general term, whereas a0 represents a parameter in Equation 1. This equation gives the probability, p, that a crack of size a will be detected in one inspection by one inspector. The probability of non-detection is 1 - p. A crack is subjected to inspection several times before it reaches the permissible size. At each inspection there is a chance that it will be missed. At successive inspections the crack will be longer and the probability of detection higher, but there is still a chance that it may go undetected. The cumulative probability of detection is then:

$$ p = 1 - \prod_{i=1}^{n} \left( 1 - p_i \right) \qquad (2) $$

where pi is the probability of detection for each crack size, which follows a curve such as the ones in Figure 2, and n is the number of inspections.

The parameters a0, α and λ were obtained from Knorr (1974) and Lewis (1978) and are summarized here:

1 - For the eddy current inspection method:

a0 = 0.889 mm (0.035")
λ = 1.98 mm (0.078")
α = 1.78

2 - For all other inspection methods:

α = 0.5
λ/a0 is a function of the inspection method only (Table 1)
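A small sketch of Eqs. (1) and (2) using the eddy current parameters quoted above; the function names are illustrative:

import numpy as np

A0, LAM, ALPHA = 0.889, 1.98, 1.78              # eddy current parameters quoted above (mm, mm, -)

def pod(a, a0=A0, lam=LAM, alpha=ALPHA):
    """Probability of detecting a crack of size a (mm) in one inspection, Eq. (1)."""
    x = np.clip((np.asarray(a, dtype=float) - a0) / (lam - a0), 0.0, None)
    return 1.0 - np.exp(-x**alpha)              # zero below a0, never exactly 1 for large cracks

def cumulative_pod(crack_sizes_at_inspections):
    """Probability that the crack is caught in at least one of n inspections, Eq. (2)."""
    return 1.0 - np.prod(1.0 - pod(crack_sizes_at_inspections))

print(pod(2.0))                                  # one eddy current inspection of a 2 mm crack
print(cumulative_pod([1.2, 1.8, 2.5]))           # the crack grows between recurring inspections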


By systematically varying the inspection interval H, the crack growth curve and the crack POD are updated to determine the cumulative probability of detection over the predicted structure lifetime. Figure 8 sketches one step of the process.

reliability. For example, if 99.9 per cent reliability is desired with a 95 per cent confidence level, it is necessary that 95 per cent of the simulated cases for that particular reliability lie to the right of the assumed inspection interval. Figure 10 depicts how the confidence level is considered. In this case, the inspection interval H1 has a 99.9 per cent probability of going through its service life intact, with a 95 per cent confidence level.

Figure 7: Process of obtaining the crack size during each inspection.

Figure 8: One step of the overall process.

Repeating the procedure several times, a probability distribution region is expected, as shown in Figure 9. Therefore, the reliability of the overall structure remaining safe during its operational life can be predicted. Because, for a given inspection interval, there will always be scatter in the simulation, a cascade-like chart is expected (Fig. 9). Hence, it is possible to establish a relationship between the probability of the structure being safe, given an inspection interval, and a level of confidence.
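A compact sketch of the overall Monte Carlo loop of Figs. 7-9 (a hypothetical implementation, not the authors' program): for each candidate interval H, many crack histories are drawn, the crack size at each recurring inspection is evaluated, and Eq. (2) gives the probability that every inspection misses the crack before it becomes critical. The pod() function of the previous sketch, or any other POD model, can be passed in. How the sampled initial flaw ai enters the crack growth curve is an illustrative assumption, since the paper does not spell it out:

import numpy as np

def reliability_vs_interval(t, f, cov, a1, ag, pod, a_crit, life,
                            intervals, n_sim=10_000, seed=0):
    """Monte Carlo estimate of the probability that the structure survives its
    service life (crack detected before reaching a_crit, or never critical),
    for each candidate recurring inspection interval H."""
    rng = np.random.default_rng(seed)
    out = []
    for H in intervals:
        insp_times = np.arange(H, life + 1e-9, H)         # inspections at H, 2H, 3H, ...
        success = 0
        for _ in range(n_sim):
            ai = rng.uniform(a1, ag)                      # initial flaw, uniform in [a1, ag]
            k = rng.standard_normal()                     # number of standard deviations
            g = f + k * cov * f                           # perturbed curve, g(t) = f(t) + k*s(t)
            g = g + (ai - g[0])                           # assumption: shift the curve through ai
            if g[-1] < a_crit:                            # never becomes critical in service life
                success += 1
                continue
            t_crit = np.interp(a_crit, g, t)              # time at which the crack is critical
            a_insp = np.interp(insp_times[insp_times < t_crit], t, g)
            p_miss_all = np.prod(1.0 - pod(a_insp))       # Eq. (2): every inspection misses
            if rng.random() > p_miss_all:                 # crack found (and repaired) in time
                success += 1
        out.append(success / n_sim)
    return np.array(out)                                  # reliability for each interval H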

The current work proposes to determine the confidence level by counting the number of points around the expected

Figure 9: Overall reliability as function of Inspection Interval.

The next Section describes the implementation of the proposed methodology in determining the reliability of structures subjected to cyclic loads under a fracture control maintenance plan.

Application Procedures

A dedicated computer program was developed to provide an automated engineering tool for recurring inspection

Figure 10: Inspection interval H1, given 99.9 per cent reliabil-ity, with 95 per cent confidence level.


reliability analysis. This program incorporates all the processes described here. The software main screen is as depicted in Figure 11.

inspection interval and how to randomize the variables. The minimum reliability is a time-saving parameter that restricts the investigation to probabilities of success above a given level. The random generation procedure may be divided into up to 100 partitions, for use with the Latin Hypercube refinement. The numbers are then randomized within each partition.

Figure 11: Main screen of the NDI Reliability Program

The first step in the analysis is to load the crack growth curve. The data file must be in tabulated text format. The first column must contain the time (hours, cycles, flights etc.) and the second column the crack size.

When crack growth data is uploaded, the program opens another screen with the fitted curve, as shown in Figure 12. The next steps are to define which type of NDI will be performed in the field and the accessibility location, the boundaries for the initial crack and the type of distribution assumed for each of the uncertainties. This version of the program allows uniform distribution for ai and uniform or normal (Gaussian) distribution for the crack growth curve. Figure 13 shows the NDI setup screen.

Figure 12: Crack Growth Curve [8].

The options in the advanced menu (Fig. 14) are the minimum reliability level to be investigated, the maximum inspection interval to be considered, the increment in the

Figure 13: Program NDI setup screen.

The Analysis Menu option opens another screen allowing simulation and definition of the NDI interval, based on the desired reliability.

For each type of inspection and/or access there is a best minimum probability to be chosen in the setup screen. For instance, for the eddy current method the POD curve rises steeply even for very small cracks. This means that, regardless of the crack growth curve, a single inspection in the lifetime will already give a very high probability of detection. For

Figure 14: Advanced Options setup.


this case, the best result will be obtained by setting the minimum reliability to 0.99 or above.

The next Section discusses the results obtained for one of the fatigue critical locations from the damage tolerance analysis of the Brazilian Air Force upgraded F-5EM, performed according to MIL-STD-1530C (USAF, 2005).

RESULTS

One F-5EM FCL, as presented by Mattos (2009), is the splice bolt in the upper cockpit longeron at Fuselage Station 284, as illustrated in Figure 15.

This is based on an initial flaw size of 2.74 mm (0.108”), with a scatter factor of two.

This component will be inspected by the dye penetrant technique. As the component is removed from the splice area and taken to a laboratory, accessibility is classified as excellent. The minimum flaw is assumed to be 0.127 mm (0.005") and the guaranteed maximum crack size when new is 1.27 mm (0.05"). The coefficient of variation (COV) of the crack growth curve is taken to be 10 per cent, normally distributed.

With all these parameters and starting the investigation at 99.9 per cent, the reliability chart is as shown in Figure 17. According to the MPH-830 (IFI, 2005), the characterization for improbable and extremely improbable is one in ten million and one in one billion per flight hour, respectively.

Figure 15: Upper longeron splice bolt F-5EM FCL @ FS 284 (Mello Jr., 2009).

This FCL has a crack curve as shown in Figure 16. This result is for the structure submitted to fatigue load spectrum of the Canoas Air Force Base fleet (Mello Jr., 2009).

Figure 16: Crack growth curve for the F-5EM upper cockpit longeron splice bolt (Data from Mello Jr., 2009).

According to the “Airplane damage tolerance requirements” (USAF, 1974) the suggested recurring inspection interval for this component is 653 flight hours.

Figure 17: Suggested flight hour interval for a 0.01 per cent risk in the structure life time, dye penetrant inspection.

The result presented in Figure 17 shows a suggested interval of 579 flight hours for a 0.01 per cent risk in the structure lifetime. The procedures adopted in this work recommend dividing the risk by the suggested inspection interval to obtain the estimated risk per flight hour. Therefore, for the determined recurring inspection time, the risk is 0.0001/579 = 1.73 × 10⁻⁷ per flight hour. This risk falls within the improbable failure risk, as described in the MPH-830.

In order to investigate the extremely improbable risk, it is necessary to refine the analysis. In this case, the starting point must be 99.999 per cent. Figure 18 shows the reliability chart for this case.

The suggested inspection interval for a risk of 0.00001 per cent in a lifetime is 295 flight hours. The risk per flight hour may be estimated as 0.0000001/295 = 3.4 × 10⁻¹⁰, which falls within the extremely improbable failure risk due to a non-detected crack.


One suggestion that arises is the possibility of changing the NDI method. Figure 19 shows the result of an analysis aimed at the one-in-one-billion probability of non-success, but considering that the item would be inspected by eddy current. The result is a recommended inspection interval of 544 flight hours for the extremely improbable risk.

structure subjected to dynamic loads. An overview was presented of the parameters involved in the fracture control procedures, and a solution, using an automated code that could incorporate the uncertainties to determine the reliability of the structure with a confidence level, was proposed. Also, a description was given of how to determine each of the variables in the problem, considering variations for the NDI methods commonly used in the field, for recurring inspections.

The methodology considers Monte Carlo simulation with a refinement using the Latin Hypercube technique. The reliability curve is obtained by generating random numbers for several inspection intervals. The reliability versus inspection interval chart can then be mapped and the safety probability obtained with a confidence level.

For the given examples, a structure submitted to dye penetrant inspection, with excellent accessibility, must be inspected every 579 flight hours for an improbable risk of failure, at a 95 per cent confidence level. To categorize the risk as extremely improbable, recurring inspections must take place every 295 flight hours. Following the standards for military airplane damage tolerance analysis, the recommended inspection interval would be 653 flight hours. One alternative for increasing the recurring inspection interval, while keeping the risk very low, is to improve the NDI method. One example shows that, by changing the inspection from dye penetrant to eddy current, the extremely improbable risk interval would increase from 295 to 544 flight hours.

REFERENCES

ASM Handbook, 1992, “Failure analysis and prevention”, 9. ed., Materials Park, OH, (ASM International, vol. 11), pp.15-46.

Broek, D., 1989, “The practical use of fracture mechanics”. Galena, OH. Kluwer Academic, pp. 361-390.

Gallagher J. P., 1984, “USAF damage tolerant design handbook: Guidelines for the analysis and design of damage tolerant aircraft structures”, Dayton Research Institute, Dayton, OH, pp. 1.2.5-1.2.13.

IFI, 2005, “Análise e gerencialmento de riscos nos vôos de certificação”, MPH-830, Instituto de Fomento à Indústria. Divisão de Certificação de Aviação Civil, São José dos Campos, S.P., Brasil.

Knorr, E., 1974, “Reliability of the detection of flaws and of the determination of flaw size”, AGARDograph, Quebec, Nº. 176, pp. 398-412.

Figure 18: Suggested flight hour interval for a 0.0001 per cent risk in the structure life time, dye penetrant inspection.

Figure 19: Suggested flight hour interval for a 0.0001 per cent risk in the structure life time, eddy current inspection.

By way of observation, it is important to emphasize that many of the parameters and variables playing a role in fracture control sometimes vary beyond the expected values. The sole objective of this work is to provide an aid to fracture control measures so that cracks can be eliminated before they become dangerous, by either repair or replacement of the component. All the assumptions are hypotheses to allow predictions based on the best available tools.

CONCLUSION

This work presents a methodology to examine structural reliability when establishing the maintenance plan for a


Lewis W. H. et al., 1978, “Reliability of non-destructive inspection”, SA-ALC/MME. 76-6-38-1, San Antonio, TX.

Manuel, L., 2002, “CE 384S - Structural reliability course: Class notes”, Department of Civil Engineering, The University of Texas at Austin, Austin, TX.

Mattos, D. F. V. et al., 2009, “F-5M DTA Program”. Journal of Aerospace Technology and Management. Vol.1, Nº1, pp. 113-120.

Mello Jr, A.W.S. et al., 2009, “Geração do ciclo de tensões para análise de fadiga, Software GCTAF F-5M”, RENG ASA-I 04/09,IAE, São José dos Campos, S.P., Brasil.

Provan, J. W., 2006 ,“Fracture, fatigue and mechanical reliability: An introduction to mechanical reliability”, Department of Mechanical Engineering, University of Victoria, Victoria, B.C.

USAF., 1974, “Airplane damage tolerance requirements”. Military Specification. Washington, DC. (MIL-A-83444).

USAF., 1974, “Airplane strength and rigidity reliability requirements, repeated loads and fatigue”. Military Specification. Washington, DC. (MIL-A-008866).

USAF., 2005, “Aircraft structural integrity program, airplane requirements”. Military Specification. Washington, DC. (MIL-STD-1530C).


Fernando Pereira de Oliveira
ETEP College
São José dos Campos - Brazil

Marcos Daisuke Oyama*
Institute of Aeronautics and Space
São José dos Campos - Brazil

* author for correspondence

Radiosounding-derived convective parameters for the Alcântara Launch Center

Abstract: Climatological features of convective parameters (K index, IK; 950 hPa K index, IK950; Showalter index, IS; lifted index, ILEV; total totals index, ITT; and convective available potential energy, CAPE) derived from 12 UTC radiosounding data collected at the Alcântara Launch Center (CLA; 2°18'S, 44°22'W) from 1989 to 2008 were computed. The parameters IK, IK950, IS and ITT (ILEV and CAPE) showed a seasonal variation coherent (not coherent) with the annual cycle of precipitation. Interdaily variability was high all year round and was comparable to the monthly average seasonal variation. For IK and IK950, the monthly fraction of days with favorable conditions for precipitation occurrence (FRAC) showed good agreement with the monthly fraction of days with precipitation greater than 0.1 mm (PRP). For ITT and IS, the seasonal variation of FRAC was lower than the seasonal variation of PRP; for ILEV and CAPE, there were marked differences between FRAC and PRP. The IK seasonal variation was primarily due to the presence of a deeper (shallower) low-level moist layer in the wet (dry) season. Among the studied convective parameters, the use of IK or IK950 for assessing precipitation occurrence is recommended.

Keywords: Storms, Tropical regions, Clouds, Climatology.

INTRODUCTION

Radiosoundings are used to obtain vertical profiles of thermodynamic (pressure, temperature and humidity) and dynamic (horizontal winds) variables from the surface up to the lower stratosphere. From these data, convective parameters (or instability indices) can be calculated and their actual values can be compared to threshold values related to the probability of rainfall occurrence. For instance, the convective available potential energy (CAPE), which is a widely used convective parameter in theoretical, observational and modeling studies (e.g., Emanuel, 1994), measures the buoyancy force potential energy for an undiluted pseudo-adiabatic ascent of a moist air parcel from the level of free convection to the level of neutral buoyancy. Values of CAPE higher than the threshold value of 1000 J kg-1 are usually regarded as high (Nascimento, 2005) and associated to atmospheric thermodynamic conditions favorable to precipitation occurrence. Therefore, convective parameters provide useful information about the atmospheric column potential for deep convection and also complement the precipitation forecast from numerical atmospheric models.

There are few studies on convective parameters for regions in Brazil (e.g., Souza et al., 2001; Fogaccia and Pereira Filho, 2002; Fisch et al., 2004; Domingues et al., 2004; Nascimento and Calvetti, 2004; Barbosa and Correia, 2005; Nóbile Tomaziello and Gandu, 2008; Nunes and Escobar, 2008; Silva et al., 2008). Although convective parameters are routinely used by operational weather forecasters, there is still a lack of studies to support and verify the use of standard threshold values that were not necessarily derived from atmospheric conditions similar to those found in Brazil (Nascimento, 2005).

Received: 01/10/09 Accepted: 26/10/09

LIST OF SYMBOLS

CLA      Alcântara Launch Center
IK       K index
IK950    K index for 950 hPa
IS       Showalter index
ILEV     Lifted index
ITT      Total totals index
CAPE     Convective available potential energy
T●       Temperature at the ● hPa level
Td●      Dew point temperature at the ● hPa level
Dep●     Dew point depression at the ● hPa level
Tpsfc→●  Temperature at ● hPa of an average air parcel over the first 500 m from the surface lifted to ● hPa by an undiluted pseudo-adiabatic process
Tp850→●  Temperature at ● hPa of an air parcel at 850 hPa lifted to ● hPa by an undiluted pseudo-adiabatic process
g        Gravity at sea level
Tp       Parcel temperature (for CAPE definition)
Ta       Environment temperature
z        Height
LFC      Level of free convection
LNB      Level of neutral buoyancy
FRAC     Monthly fraction of days when the convective parameter value is favorable to precipitation occurrence
PRP      Monthly fraction of days with precipitation greater than 0.1 mm


In this work, the climatological features of some widely used convective parameters are obtained for a specific region located on the northern coast of Northeast Brazil: the Alcântara Launch Center (CLA; 2°18'S, 44°22'W) (Marques and Fisch, 2005). The climatology of convective parameters could be useful for weather forecasting and nowcasting during rocket launching missions at CLA. At CLA, surface and upper-air meteorological data are collected on a regular basis for climatological studies (e.g., Marinho et al., 2009).

DATA AND METHODOLOGY

The climate in CLA is tropical humid (IBGE, 2002). Monthly precipitation shows a marked seasonal variation (Figure 1): austral autumn (spring) is the wet (dry) season, and austral summer and winter are the transition seasons (Pereira et al., 2002; Barros, 2008). Maximum (minimum) values of monthly precipitation are found in March and April (September and October).

surface). This procedure aims at attenuating the excessive sensitivity of CAPE and ILEV to small variations in surface values (Manzato, 2003). The pseudo-adiabatic ascent is based on unsaturated potential temperature conservation from the surface to the lifting condensation level (LCL) and on equivalent potential temperature conservation (as defined by Bolton, 1980) from the LCL upwards. Equivalent potential temperature conservation is attained iteratively from two initial limit ascents: dry adiabatic and isothermal.

The time series of the convective parameters have gaps due to missing radiosoundings or removed radiosoundings in the pre-processing procedure; no gap filling procedure is used.

The standard and commonly used threshold values for rainfall occurrence (i.e., favorable conditions for rainfall occurrence) are given in Table 1. They are based on Comando da Aeronáutica (2004) and Nascimento (2005). For IK950, no threshold value has yet been proposed in the literature; so, it is assumed that the IK950 threshold value is 10°C higher than IK’s, since IK950 is on average 10°C higher than IK (Corrêa, 2007).
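From the definitions in Table 1, the purely level-based indices can be computed directly from the interpolated sounding. The sketch below is illustrative (the level values are invented, not an actual CLA sounding); IS, ILEV and CAPE additionally require the pseudo-adiabatic parcel ascent described in the methodology and are omitted here:

def k_indices(t850, td850, t700, td700, t500, t950=None, td950=None):
    """K index (IK), 950 hPa K index (IK950) and total totals index (ITT),
    following the definitions in Table 1.  All inputs in degrees Celsius."""
    dep700 = t700 - td700                     # 700 hPa dew point depression
    ik = t850 - t500 + td850 - dep700         # IK
    itt = t850 + td850 - 2.0 * t500           # ITT
    ik950 = None
    if t950 is not None and td950 is not None:
        ik950 = t950 - t500 + td950 - dep700  # IK950
    return ik, ik950, itt

ik, ik950, itt = k_indices(t850=18.0, td850=14.0, t700=8.0, td700=2.0,
                           t500=-8.0, t950=24.0, td950=21.0)
print(f"IK = {ik:.1f} C (favorable if > 30), "
      f"IK950 = {ik950:.1f} C (> 40), ITT = {itt:.1f} C (> 40)")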

CLIMATOLOGY

The annual cycle of the convective parameters is shown in Figure 2. Seasonal change from wet to dry season is found for all parameters, although only IK, IK950, ITT and IS exhibit coherence to the precipitation cycle (Figure 1). The values for ILEV and CAPE are almost constant throughout the wet season, i.e., no expected maximum (CAPE) or minimum (ILEV) is found in March or April (when precipitation is maximum). For all parameters, interdaily variability is high all year round and comparable to the monthly average seasonal variation.

Validity of the threshold values given in Table 1 could be preliminarily evaluated by considering that the convective parameters’ annual average is a natural threshold value candidate because it splits the convective parameters distribution into two equal portions (one portion related to the wetter months; the other, to the drier ones). For IK950 and IK, a slightly lower threshold value (than that given in Table 1) could be more suitable; for IS and ITT, higher; and for ILEV and CAPE, much lower.

For a given convective parameter and month, FRAC is defined as the fraction of days when the convective parameter value is favorable to precipitation occurrence (according to the threshold values given in Table 1). To further check the validity of the threshold values given in Table 1, the annual cycle of FRAC for all convective parameters is compared to the annual cycle of PRP, defined as the fraction of days of a given month with precipitation higher than 0.1 mm (Figure 3). The results obtained show that:

Figure 1: Average and interannual standard deviation of monthly precipitation at CLA (mm) from Sep 1993 to Mar 2007. Source: Barros (2008).

Daily 12 UTC radiosounding data collected at CLA from 1989 to 2008 were used. The raw data undergo a pre-processing procedure to filter out spikes (a datum is regarded as a spike when the absolute change between adjacent levels is greater than a prescribed variable-related threshold value), vertically interpolate the data (regular 25 hPa spacing between adjacent levels) and remove radiosoundings with less data than needed to appropriately calculate the convective parameters (e.g., radiosounding terminated prematurely, lack of humidity data, etc.).
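A minimal sketch of the de-spiking and 25 hPa re-gridding steps for a single variable of one sounding; the spike threshold and grid limits are illustrative, since the paper does not list the actual values used:

import numpy as np

def preprocess_variable(p, x, max_jump, p_top=100.0, dp=25.0):
    """De-spike and re-grid one variable of a raw sounding.

    p        : pressure levels [hPa], decreasing with height
    x        : variable at those levels (temperature, dew point, ...)
    max_jump : spike threshold for the absolute change between adjacent levels
    Returns the regular 25 hPa pressure grid and the interpolated variable.
    """
    p = np.asarray(p, dtype=float)
    x = np.asarray(x, dtype=float)
    keep = np.ones(x.size, dtype=bool)
    keep[1:] = np.abs(np.diff(x)) <= max_jump           # flag spikes against the previous level
    grid = np.arange(p[0], p_top - dp / 2.0, -dp)       # regular 25 hPa spacing
    # np.interp needs an increasing abscissa, so reverse the (decreasing) pressure axis
    return grid, np.interp(grid, p[keep][::-1], x[keep][::-1])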

For each pre-processed radiosounding, the following convective parameters are calculated: K index (IK), K index for 950 hPa (IK950), Showalter index (IS), lifted index (ILEV), total totals index (ITT) and CAPE. The definition of these parameters is given in Table 1, and follows Nascimento (2005) and Corrêa (2007). CAPE and ILEV are calculated by considering the ascent of an average air parcel over the first 500 m from the surface (instead of using an air parcel at the


Figure 2: Monthly average and interdaily standard deviation for IK (a), IK950 (b), IS (c), ITT (d), ILEV (e) and CAPE (f). Blue horizontal line refers to the threshold values given in Table 1.

Table 1: Convective parameters: definition and threshold values for rainfall occurrence (see list of symbols). Source: Comando da Aeronáutica (2004), Nascimento (2005) and Corrêa (2007).

Convective parameter                             Definition                          Condition for rainfall occurrence
K index (IK)                                     T850 – T500 + Td850 – Dep700        > 30°C
950 hPa K index (IK950)                          T950 – T500 + Td950 – Dep700        > 40°C
Showalter index (IS)                             T500 – Tp850→500                    < 1°C
Lifted index (ILEV)                              T500 – Tpsfc→500                    < 0°C
Total totals index (ITT)                         T850 + Td850 – 2 × T500             > 40°C
Convective available potential energy (CAPE)     g ∫ LFC→LNB [(Tp – Ta)/Ta] dz       > 1000 J kg-1

• For IK and IK950, there is good agreement between FRAC and PRP (Figure 3a). The slight underestimation of FRAC for IK (with respect to PRP) could be fully corrected if a slightly lower threshold value was used (as stated previously).

• For ITT and IS, FRAC shows good agreement compared to PRP in only one season (wet season for ITT and dry season for IS) because the seasonal variation of both parameters is not sufficiently large

(Figure 3b). Changes in the threshold values could shift the FRAC curve up or down (for instance, by using 42°C as the threshold value for ITT and 2°C for IS, the FRAC curves for both parameters become almost the same) but are not able to enlarge the seasonal variation. Therefore, good agreement between FRAC and PRP would not be possible by solely changing the threshold values.

• For ILEV and CAPE, differences between FRAC and PRP are more pronounced (Figure 3c). FRAC for


results of the previous section confirmed that IK (as well as IK950, which may be regarded as a small variation of IK) would be effective to predict precipitation occurrence. Since IK is able to represent the seasonal cycle of PRP, a deeper analysis of the terms that compose IK is carried out to unravel the thermodynamic conditions related to precipitation suppression during the dry season in CLA.

During the dry season, the Intertropical Convergence Zone (ITCZ) attains its northernmost position in the Northern Hemisphere, and its subsidence branch affects the northern part of Brazil (including the CLA) (Peixoto and Oort, 1992). Using Reanalysis data, Pereira Neto (2009) found that large scale subsidence strongly affects only the mid and upper levels; at the lower levels, upward motion is found and would be enough to initiate updrafts in cumulus clouds. Updraft velocities are normally one order of magnitude greater than large scale vertical velocity; thus, large scale subsidence over CLA would not be enough to inhibit updraft enhancement due to latent heat release, i.e., formation of deep cumulus clouds would be expected. However, the majority of clouds found over CLA in the dry season are shallow (stratocumulus) non-precipitating clouds; so, what are the processes that inhibit updraft enhancement and the development of deep clouds?

As shown in Table 1, IK is the sum of three terms. The first term (T850 – T500) represents the low-level lapse rate. The temperature difference remains almost constant throughout the year; the values for March (wettest month) and October (driest month) are almost the same (about 23.5°C) and, therefore, could not explain the IK seasonal variation (Figure 4a). The second term (Td850) is related to low-level moisture and it shows a seasonal variation of about 3°C (Figure 4b). The third term (Dep700) is related to low-level moist layer thickness and it shows a marked seasonal variation of about 7°C (Figure 4c). Therefore, the 10°C seasonal variation in IK (Fig. 2a) is due to a deeper (shallower) moist layer and greater (lower) low-level moisture in the wet (dry) season. The seasonal changes in low-level lapse rate are not relevant.

A deeper moist layer (or greater moist layer thickness) aids cumulus convection by providing moister environmental air for entrainment into updrafts, thus weakening the drag effect exerted by mass entrainment in updrafts (Holton, 1992). This is an important mechanism for deep convection over oceans (Sui et al., 1997). Therefore, in the dry season at CLA, the entrainment of drier low- and mid-level environmental air into updrafts could limit cloud growth and lead to the formation of shallow clouds. Mid- and upper-level large scale subsidence could confine moisture to the lowest levels, thus reducing the low-level moist layer thickness. This tentative explanation for deep cloud suppression in the dry season at CLA integrates both dynamic and thermodynamic aspects of convection and needs to be confirmed by further studies.

Figure 3: Solid lines: monthly fraction of days (%) when the convective parameter value is favorable to precipitation occurrence for IK and IK950 (a), ITT and IS (b), and ILEV and CAPE (c). Dashed line (PRP in all panels): monthly fraction of days (%) with daily precipitation greater than 0.1 mm (source: Barros, 2008).

ILEV (CAPE) overestimates (underestimates) PRP all year round, and FRAC during the wet season is almost constant, i.e., it does not show maximum values in March or April. As for ITT and IS, good agreement between FRAC and PRP would not be possible by solely changing the threshold values.

By using the degree of agreement between FRAC and PRP as criterion to single out the best convective parameters to predict precipitation occurrence, IK and IK950 could be strongly recommended; ITT and IS could be used for only one season (wet season for ITT and dry season for IS); and ILEV and CAPE should not be used.
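To make the comparison criterion concrete, the sketch below shows one way of building the two monthly curves: FRAC is the fraction of days in a month on which a parameter exceeds its threshold, and PRP is the fraction of days with daily precipitation above 0.1 mm. The data layout (a list of day records) and the values are hypothetical, not the authors' dataset.

```python
# Minimal sketch: monthly FRAC and PRP curves from daily records.
# Each record is (month, parameter_value, daily_rain_mm); values are invented.

from collections import defaultdict

def monthly_fractions(records, threshold=30.0, rain_min=0.1):
    n_days = defaultdict(int)
    n_frac = defaultdict(int)
    n_prp = defaultdict(int)
    for month, value, rain in records:
        n_days[month] += 1
        n_frac[month] += value > threshold     # parameter favorable to rain
        n_prp[month] += rain > rain_min        # day with measurable rain
    frac = {m: 100.0 * n_frac[m] / n_days[m] for m in n_days}
    prp = {m: 100.0 * n_prp[m] / n_days[m] for m in n_days}
    return frac, prp

records = [(3, 34.2, 12.0), (3, 28.9, 0.0), (10, 22.5, 0.0), (10, 31.4, 0.3)]
frac, prp = monthly_fractions(records)     # e.g. for IK with a 30 deg C threshold
print("FRAC (%):", frac)
print("PRP (%):", prp)
```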

FACTORS RELATED TO K INDEX SEASONAL VARIATION

IK is one of the most commonly used convective parameters by operational weather forecasters, and the


Figure 4: Monthly average and interdaily standard deviation for the terms of IK: (a) first term, T850 – T500; (b) second term, Td850; (c) third term, Dep700.

• IK seasonal variation was primarily due to the presence of a deeper (shallower) low-level moist layer in the wet (dry) season.

The climatology derived here could be useful for operational purposes and be regarded as reference values in studies for CLA. Also it may contribute to a more comprehensive characterization of the behavior of convective parameters for regions in Brazil.

In this work, only convective parameters were focused on for the sake of simplicity and conciseness. To evaluate the skill for predicting precipitation occurrence from convective parameters, a statistical analysis on a daily basis using not only radiosounding data but also precipitation data could be necessary. This issue, along with the characterization of convective parameters that use wind data information (e.g., Richardson number, Severe Weather Threat Index, etc.), will be addressed in a future work.

ACKNOWLEDGEMENTS

The first author was supported by an undergraduate research initiation grant from CNPq (PIBIC-IAE) from Aug 2006 to Dec 2007, when part of this work was carried out. The authors would like to thank Dr. Gilberto F. Fisch, Mr. Evandro de P. e Mello, Mr. Ieso de Miranda and Mr. José Marcos B. da Silveira for providing the data, and the meteorological team at CLA for collecting the radiosounding data. The comments and suggestions from two anonymous reviewers helped to improve the manuscript.

REFERENCES

Barbosa, T. F., Correia, M. F., 2005, “Sistemas Convectivos Intensos no Semi-Árido Brasileiro: o Controle da Grande Escala”, Revista Brasileira de Meteorologia, Vol.20, No. 3, pp. 395-410.

Barros, S.S., 2008, “Precipitação no Centro de Lançamento de Alcântara: Aspectos Observacionais e de Modelagem”, M.Sc. Dissertation, Instituto Nacional de Pesquisas Espaciais, São José dos Campos, S.P., Brazil, 115 p. (INPE-15319-TDI/1362).

Bolton, D., 1980, “The Computation of Equivalent Potential Temperature”, Monthly Weather Review, Vol.108, pp.1046-1053.

Comando da Aeronáutica, 2004, “Meteorologia Física I: Índices de Instabilidade”, CFOE MET. 7p.

Corrêa, C.S., 2007, “A Ocorrência de Fluxos no Perfil Vertical do Vento na Baixa Atmosfera e seu Efeito na Intensidade do Índice K”, Revista Brasileira de Meteorologia, Vol.22, pp.129-133.

CONCLUSIONS

Climatological features of convective parameters (IK, IK950, IS, ILEV, ITT and CAPE) derived from 12 UTC radiosounding data collected at the Alcântara Launch Center (CLA) from 1989 to 2008 were obtained. The main conclusions of this work were:

• IK, IK950, IS and ITT (ILEV and CAPE) showed a seasonal variation coherent (not coherent) to the annual cycle of precipitation. Interdaily variability was high all year round and comparable to the monthly average seasonal variation.

• For IK and IK950, FRAC showed good agreement with PRP. For ITT and IS, the seasonal variation of FRAC was lower than the seasonal variation of PRP. For ILEV and CAPE, there were marked differences between FRAC and PRP.

• Among the studied convective parameters, the use of IK or IK950 for assessing precipitation occurrence could be recommended.



Domingues, M. O., Mendes Jr, O., Chan, S. C., Sá, L. D. A., Manzi, A. O., 2004, “Análise das Condições Atmosféricas Durante a 2a Campanha do Experimento Interdisciplinar do Pantanal Sul Mato-Grossense”, Revista Brasileira de Meteorologia, Vol.19, No.1, pp.73-88.

Emanuel, K.A., 1994, “Atmospheric Convection”, Oxford University Press. 580p.

Fisch, G., Tota, J., Machado, L. A. T., Silva Dias, M. A. F., Lyra, R. F. F., Nobre, C. A., Dolman, A. J., Gash, J. H. C., 2004, “The Convective Boundary Layer Over Pasture and Forest in Amazonia”, Theoretical and Applied Climatology, Vol.78, pp.47-59.

Fogaccia, C. V. C., Pereira Filho, A. J., 2002, “Turbulência e Cisalhamento do Vento na Área do Aeroporto Internacional de São Paulo/Guarulhos”, Proceedings of the 12th Brazilian Congress of Meteorology, Foz do Iguaçu, Brazil.

IBGE, 2002, “Mapa Brasil Climas – Escala 1:5.000.000”.

Holton, J.R., 1992, “An Introduction to Dynamic Meteorology”, Academic Press, 3. Ed. 511 p.

Manzato, A., Morgan Jr., G., 2003, “Evaluating the Sounding Instability With the Lifted Parcel Theory”, Atmospheric Research, Vol.67-68, pp.455-473.

Marinho, L.P.B., Avelar, A.C., Fisch, G.F., Roballo, S.T., Gielow, R.G., Giradi, R.M., 2009, “Studies using wind tunnel to simulate the Atmospheric Boundary Layer at the Alcântara Space Center”, Journal of Aerospace Technology and Management, Vol.1, Nº1, pp. 91-98.

Marques, R. F. C., Fisch, G. F., 2005, “As Atividades de Meteorologia Aeroespacial no Centro Técnico Aeroespacial (CTA)”, Boletim da Sociedade Brasileira de Meteorologia, Vol.29, No. 3, pp. 21-25.

Nascimento, E. L., Calvetti, L., 2004, “Identificação de Condições Precursoras de Tempestades Severas no Sul do Brasil Utilizando-se Radiossondagens e Parâmetros Convectivos”, Proceedings of the 13th Brazilian Congress of Meteorology, Fortaleza, Brazil.

Nascimento, E. L., 2005, “Previsão de Tempestades Severas Utilizando-se Parâmetros Convectivos e Modelos de Mesoescala: Uma Estratégia Operacional Adotável no Brasil?”, Revista Brasileira de Meteorologia, Vol.20, No. 1, pp.121-140.

Nóbile Tomaziello, A. C., Gandu, A. W., 2008, “Índices de Instabilidade e Tempestades Severas na Região Metropolitana de São Paulo”, Proceedings of the 15th Brazilian Congress of Meteorology, São Paulo, Brazil.

Nunes, A. B., Escobar, G., 2008, “Análise dos Índices de Estabilidade dos Eventos Severos de 2006-2007 na Cidade de São Paulo”, Proceedings of the 15th Brazilian Congress of Meteorology, São Paulo, Brazil.

Peixoto, J.P., Oort, A. H., 1992, “Physics of Climate”, American Institute of Physics, 520p.

Pereira, E. I., Miranda, I., Fisch, G. F., Machado, L. A. T., Alves, M. A. S., 2002, “Atlas Climatológico do Centro de Lançamento de Alcântara”, IAE, São José dos Campos, S.P., Brazil. (ACA/RT-01/01, GDO-000000/B0047)

Pereira Neto, A.V., 2009, Private Communication.

Silva, L. M., Mota, M. A. S., Sá, L. D. A., 2008, “Análise da Variabilidade da Altura da Camada de Mistura (CM) e da Energia Potencial Convectiva Disponível (CAPE) Durante os Experimentos CIMELA e COBRA-PA, Realizados na Flona de Caxiuanã, Pará”, Proceedings of the 15th Brazilian Congress of Meteorology, São Paulo, Brazil.

Sui, C. H., Lau, K. M., Takayabu, Y. N., Short, D. A., 1997, “Diurnal Variations in Tropical Oceanic Cumulus Convection during TOGA COARE”, Journal of the Atmospheric Sciences, Vol. 54, pp.639-655.

Souza, E. P., Leitão, M. M. V. B. R., Barbosa, T. F., 2001, “Características da Atmosfera Superior, a Partir de Dados de Alta Resolução Obtidos à Superfície”, Revista Brasileira de Engenharia Agrícola e Ambiental, Vol. 5, No. 3, pp. 463-468.


Edson Cocchieri Botelho*
São Paulo State University
Guaratinguetá, Brazil
[email protected]

Rogério Lago Mazur
São Paulo State University
Guaratinguetá, Brazil
[email protected]

Michelle Leali Costa
São Paulo State University
Guaratinguetá, Brazil
[email protected]

Geraldo Maurício Cândido
Institute of Aeronautics and Space
São José dos Campos, Brazil
[email protected]

Mirabel Cerqueira Rezende
Institute of Aeronautics and Space
São José dos Campos, Brazil
[email protected]

*author for correspondence

Fatigue behaviour study on repaired aramid fiber/epoxy composites

Abstract: Aramid fiber reinforced polymer composites have been used in a wide variety of applications, such as aerospace, marine, sporting equipment and the defense sector, due to their outstanding properties combined with low density. The most widely adopted procedure to investigate the repair of composites has been to repair damage simulated in composite specimens. This work presents the influence of structural repair on the tensile and fatigue properties of a typical aramid fiber/epoxy composite used in the aerospace industry. According to this work, the aramid/epoxy composites with and without repair present tensile strength values of 618 and 680 MPa, respectively, and tensile moduli of 26.5 and 30.1 GPa, respectively. The fatigue results show that at loads higher than 170 MPa both composites present a short fatigue life (lower than 200,000 cycles), and the repaired aramid/epoxy composite presented lower fatigue resistance at both low and high cycles when compared with the non-repaired composite. With these results, it is possible to observe a decrease in the measured mechanical properties of the repaired composites.

Keywords: Fatigue behavior, Aramid/epoxy composite, Structural composites, Mechanical behavior.

INTRODUCTION

In recent years, fiber-reinforced composites have gained much attention due to their use in aerospace, marine, automobile, medical and other engineering industries. Among thermoset polymers, epoxy resins are the most common matrices for high performance aramid-fiber composites due to their easy processing conditions (Botelho et al., 2002; Botelho et al., 2003; Botelho et al., 2005a; ABARIS, 1998; Cerny et al., 2000).

The continuous use of structural polymer composites in the aeronautical industry has required the development of techniques for repairing the damage found in different types of composites. The first step of a repair procedure is to determine the extent of the damage sustained by the structure. One must always assume that the actual damage can be more extensive than the visible damage. This is especially true for aramid fiber-reinforced composites made with brittle standard cured epoxy resins (177°C cured epoxy matrix resins). After an impact with a foreign object, there is generally, but not invariably, some visual indication in the form of paint damage. However, due to the elasticity of high modulus fibers, the composite often springs back, leaving residual subsurface damage in the form of broken fibers, ply splitting and, in the case of sandwich panels, crushed core and disbonded face sheets (Ashcroft et al., 2001; Kawai et al., 2001; Roudet et al., 2002; Gregory et al., 2005; Botelho et al., 2005b).

A similar fatigue damage tolerance mechanism may maintain the inherent properties of the repaired aramid fiber/epoxy composites when compared with the non-repaired aramid composite. Fatigue damage results in a change of strength, stiffness and other mechanical properties of the composite material. Damage phenomena under various loading conditions are significantly different for polymeric composites. Damage can occur through crack formation due to fiber breakage, matrix crack propagation, fiber-matrix debonding, void growth and delamination. Any one or a combination of these mechanisms may lead to a reduction of the overall modulus and strength. Therefore, fatigue failure is a progressive process during which the overall modulus and strength decrease progressively until their values can no longer resist the applied loading and hence total failure occurs (Botelho et al., 2005a; Ashcroft et al., 2001; Kawai et al., 2001; Roudet et al., 2002; Gregory et al., 2005).

In many fatigue studies, the fatigue performance of materials is analyzed by investigating the relationship between the fatigue load, either applied stress or applied strain, and the fatigue life (or number of cycles to failure). The applied fatigue stress can be expressed as the maximum fatigue stress or as a normalized applied stress. The normalized applied stress is the ratio of the maximum fatigue stress to the


ultimate quasi static stress or strength of the composite. The normalized applied stress is often used to compare two or more materials with different values of ultimate tensile stress (Ashcroft et al., 2001; Kawai et al., 2001; Roudet et al., 2002; Gregory et al., 2005).
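A minimal numerical illustration of this normalization is sketched below, using an example maximum fatigue stress together with the mean ultimate strengths reported later in Table 1; the chosen stress level is only illustrative.

```python
# Minimal sketch: normalized applied stress S = sigma_max / sigma_ult.
# sigma_max is an example level; the strengths are the mean values reported
# for the two laminates in this work.

def normalized_stress(sigma_max, sigma_ult):
    return sigma_max / sigma_ult

sigma_max = 170.0  # MPa, illustrative maximum fatigue stress
for label, sigma_ult in (("non-repaired", 680.0), ("repaired", 618.0)):
    print(f"{label}: S = {normalized_stress(sigma_max, sigma_ult):.2f}")
```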

The objective of the present study is to evaluate the effects of the fatigue behavior on repaired aramid fiber/epoxy composites. Mechanical tests were performed in order to verify possible degradation on static mechanical properties, before the specimens undergo fatigue experiments. The stress as a function of the number of fatigue cycles (S-N curve) is then obtained. Also, the specimens were analyzed by microscopic techniques before and after the mechanical experiments.

MATERIALS AND EXPERIMENTAL PROCEDURE

Aramid fiber fabric/epoxy (AF/E) prepreg was used for the composite preparation. In this work, a plain weave fabric style was used (each fiber bundle contained 3,000 monofilaments). The composite was prepared using an autoclave system. The fiber content in each composite was approximately 60% (v/v). The epoxy resin used, specified as F584 and manufactured by Hexcel, is a structural resin with a cure temperature of up to 181°C and a glass transition temperature of 154°C (ABARIS, 1998).

The composites were cured in an autoclave, under a pressure of 0.69 MPa and vacuum of 0.08 MPa, following a heating cycle of up to 181°C. The aramid fiber/epoxy composites obtained were divided into two batches. The first batch was used as the reference material. The second batch of continuous fiber/epoxy laminates was cut and machined. Figure 1 shows this process, where the cut used to simulate the removal of the damaged part of the specimens (20 mm x 200 mm) can be observed. After this procedure, the same aramid fiber/epoxy prepregs used in the original laminate preparation were carefully stacked in the damaged region using the scarf technique (ABARIS, 1998) in order to repair the laminate.

Cross-section micrographs of the studied composites were obtained by optical microscopy (OM) in order to evaluate how homogeneous the lamination was and to examine the specimens in detail after the mechanical tests. The morphological evaluation was performed using a Nikon Epiphot 200 microscope. Measurements of the tensile properties of the aramid fiber fabric composites with and without repair were performed according to the ASTM D3039 standard (ASTM, 1985a). The tensile tests were carried out in an Instron 8801 machine. An extensometer was attached to the specimen to measure displacements in the longitudinal direction. Fatigue

tests were performed using a servo-hydraulic machine (25 kN) at constant load amplitude. The fatigue tests were carried out according to ASTM D3479 (ASTM, 1985b) with a stress ratio (σmin/σmax) of 0.1, where σmax and σmin are the maximum and minimum applied stresses, respectively, and σult is the ultimate strength of the composites. The fatigue frequency was 10 Hz. Glass fiber/epoxy end tabs with a length of 40 mm were attached at both ends of the specimens to avoid failure around the gripping device during the tests. The specimens were cycled up to 1,000,000 cycles; the same fatigue stress levels were applied to both laminates and the fatigue life was measured.
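For reference, the sketch below spells out the constant-amplitude load parameters implied by the stress ratio σmin/σmax = 0.1 used in the tests (commonly denoted R); the chosen σmax value is only an example within the stress range explored in this work.

```python
# Minimal sketch: load parameters for constant-amplitude fatigue at R = 0.1.

def fatigue_load(sigma_max, R=0.1):
    sigma_min = R * sigma_max
    amplitude = 0.5 * (sigma_max - sigma_min)
    mean = 0.5 * (sigma_max + sigma_min)
    return sigma_min, amplitude, mean

sigma_max = 180.0  # MPa, illustrative level
sigma_min, amp, mean = fatigue_load(sigma_max)
print(f"sigma_min = {sigma_min:.0f} MPa, amplitude = {amp:.0f} MPa, "
      f"mean stress = {mean:.0f} MPa, frequency = 10 Hz")
```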

RESULTS AND DISCUSSION

Figure 2 depicts a representative optical micrograph of the repaired aramid fiber/epoxy cross-section, showing the repaired area of this composite. The repair technique used induces small resin-rich regions in the

Figure 1: Details of the cut and machined area of the laminate used to simulate the damage to be repaired (a) and scheme of the scarf repair used (b).


composite. Tensile and fatigue tests were conducted in order to evaluate this possible decrease in the mechanical properties.

due to the heterogeneity of the resin and the discontinuity of the reinforcement in the repaired area. Hence, by using this repair technique, it is possible to reconstitute up to 90% of the original properties of the aramid fiber/epoxy composite. In spite of this reduction in tensile strength, it is observed in this work that the ultimate tensile strain is almost the same for the non-repaired and repaired specimens (~3% mismatch). Additionally, a decrease of ~12% in the tensile modulus is observed in the repaired specimens relative to the non-repaired ones, consistent with the results obtained for the ultimate tensile stress.
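The percentage reductions quoted above follow directly from the mean values in Table 1; a minimal check:

```python
# Minimal sketch: percentage reduction of the repaired laminate relative to
# the non-repaired one, using the mean values reported in Table 1.

def reduction(non_repaired, repaired):
    return 100.0 * (non_repaired - repaired) / non_repaired

print(f"Tensile strength: {reduction(680.0, 618.0):.1f}% lower after repair")
print(f"Tensile modulus:  {reduction(30.1, 26.5):.1f}% lower after repair")
```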

Figure 3 presents the S-N fatigue curves for repaired and non-repaired aramid fiber/epoxy composites. In this experiment, it should be mentioned that in all specimens


Figure 2: Optical microscopy of the repaired area of aramid fiber/epoxy composite.

Table 1 presents the experimental tensile properties of the non-repaired and repaired aramid fiber/epoxy composites. The tensile properties of the non-repaired aramid fiber/epoxy composites showed good agreement (6% mismatch) with the results available in the literature, around 720 MPa for non-repaired and 660 MPa for repaired specimens (Botelho et al., 2005a; ABARIS, 1998; Cerny et al., 2000; Ganczakowski and Beaumont, 1989). The differences between experimental and literature results are expected for polymer composites, since interface effects or the presence of voids can be induced under different processing conditions (Ganczakowski and Beaumont, 1989).

Table 1: Tensile properties for the specimens studied.

Property | Non-repaired composite | Repaired composite
Tensile stress (MPa) | 680 ± 37 | 618 ± 32
Tensile strain (%) | 1.37 ± 0.09 | 1.34 ± 0.07
Elastic modulus (GPa) | 30.1 ± 1.1 | 26.5 ± 2.1

According to the results presented in Table 1, the repaired composites present a decrease of around 10% in tensile stress when compared with the non-repaired composites,

plain weave fabric was used; therefore, the load in the 0° and 90° directions will be almost the same.

By means of Figure 3, it can be observed that in both cases, at low and high numbers of cycles (using the same frequency value), the repaired aramid fiber/epoxy composites show a decrease in fatigue life of around 10% (low cycle) and 18% (high cycle) when compared with the non-repaired composites. At a low number of cycles (lower than 200,000 cycles), the non-repaired composite reached fatigue resistance values between 170 and 220 MPa, whereas for the repaired composites these values were between 160 and 205 MPa. For a high number of cycles (higher than 200,000 cycles) these values are lower than those found at low cycles, ranging from 150 to 170 MPa for the non-repaired aramid fiber/epoxy composites and from 130 to 150 MPa for the repaired composites. According to Figure 3, both composites present a similar behavior.

Figure 3: Fatigue performance of non-repaired and repaired aramid fiber/epoxy composites (maximum stress, in MPa, versus fatigue life, in cycles).


According to the results presented in Figure 4, it is observed that when fatigue tests are performed at high and low numbers of cycles, the repaired specimens can be affected by void-rich regions created during the repair. These voids are responsible for delaminations but, due to the low loads, the composite did not present catastrophic fracture, being most likely affected by debonding (ABARIS, 1998). The debonding occurred randomly in the specimen before rupture, but parallel to the fatigue loading direction (ABARIS, 1998; Ganczakowski and Beaumont, 1989). When this kind of debonding propagation occurs, fatigue damage can be concentrated in one particular region of the specimen. As a consequence, that region will become weaker and critical.

of fatigue cycles. Debonding can occur randomly in the specimen, but mainly parallel to the fatigue loading direction. As a consequence, the debonded regions became weaker and critical.

The results of this study demonstrate that the repaired aramid fiber/epoxy composites show a decrease in fatigue resistance of approximately 10% (low cycle) and 18% (high cycle) when compared with the non-repaired ones. Even so, this repair process can be used in aerospace applications. These results can be associated with the good morphological aspects (good interface and no voids and cracks) and with the mechanical behavior observed when both laminates are compared, which shows a decrease of 12.1% in the tensile modulus of the repaired specimens, corroborating the results obtained in the fatigue tests.

ACKNOWLEDGEMENT

The authors acknowledge the financial support received from FAPESP (grants 05/54358-7) and CNPq. The authors are also indebted to Mr. Manuel Francisco S. Filho for his help in the fatigue tests and to Sérgio Mayer from EMBRAER for helping to produce the composites.

REFERENCES

ABARIS, 1998, “Training Advanced Composite Structures: Fabrication and Damage Repair”, Abaris Training Resources Inc.

ANNUAL AMERICAN STANDARD TEST METHODS, 1985, “ASTM-D 3039-76: Standard Test Method for Tensile Properties of Fiber-Resin Composites”, Philadelphia, PA.

ANNUAL AMERICAN STANDARD TEST METHODS, 1985, “ASTM-D 3479: Standard Test Method for Fatigue of Fiber-Resin Composites”, Philadelphia, PA.

Ashcroft, I.A., et al., 2001, “The Effect of Environment on the Fatigue of Bonded Composite Joints. Part 1: Testing and fractography”, Composites, Part A, Nº. 32, pp. 45-58.

Botelho, E. C., Nogueira, C. L., Rezende, M. C., 2002, “Monitoring of Nylon 6.6/Carbon Fiber Processing by X Ray Diffraction and thermal Analysis”, Journal of Applied Polymer Science, Nº. 86, pp. 3114-3121.

Botelho, E.C., Lauke, B., Figiel, L., Rezende, M.C., 2003, “Mechanical Behavior of Carbon Fiber-Reinforced Polyamide Composites”, Science and Technology, Nº. 63, pp.1843-1855.

Figure 4: Optical microscopy of aramid fiber/epoxy composites after high cycle fatigue test: a) non-repaired; b) repaired.

CONCLUSION

Aramid fiber/epoxy repaired composites presented void rich regions in the repaired area, creating delaminations, when assessed under fatigue at high and low number


Botelho, E. C., Pardini, L.C., Costa, M.L., Rezende, M.C., 2005, “Hygrothermal Effects on Viscoelastic Behavior of Glass Fiber/Epoxy Composites”, Journal of Materials Science, Nº. 40, pp. 3615-3623.

Botelho, E. C., Pardini, L. C., Rezende, M. C., 2005, “Hygrothermal Effects on Metal/Glass Fiber/Epoxy Hybrid Composites”, Materials Science & Engineering, Nº. 399, pp. 190-198.

Cerny, M., Glogar, P., Manocha, L.M., 2000, “Resonant Frequency Study of Tensile and Shear Elasticity Moduli of Carbon Fiber Reinforced Composites – CFRC”, Carbon, Nº. 38, pp. 2139-2149.

Ganczakowski, H. L., Beaumont, P. W. R., 1989, “The Behavior of Kevlar Fibre-Epoxy Laminates under Static and Fatigue Loadings Part I – Experimental”, Composites Science and Technology, Vol.36, Nº. 4, pp. 345-354.

Gregory, J.R., Spearing, S.M., 2005, “Constituent and Composite Quasi-Static and Fatigue Fracture Experiments”, Composites - Part A, Nº. 36, pp. 665-674.

Kawai, M., Yajima, S., Hachinohe, A., Kawase, Y., 2001, “High-Temperature off-axis Fatigue Behavior of Unidirectional Carbon-Fibre Reinforced Composites with Different Resin Matrices”, Composites Science and Technology, Nº. 61, pp. 1285-1302.

Roudet, F., Desplanques, Y., Degallaix, S., 2002, “Fatigue of Glass/Epoxy Composite in Three-point-bending with Predominant Shearing”, International Journal of Fatigue, Nº. 24, pp. 327-337.


Juliano Libardi*
State University of Campinas
[email protected]

Sérgio P. Ravagnani
State University of Campinas
[email protected]

Ana Marlene F. Morais
Institute of Aeronautics and Space
São José dos Campos, Brazil
[email protected]

Antonio Roque Cardoso
Institute of Aeronautics and Space
São José dos Campos, Brazil
[email protected]

* author for correspondence

Study of plasticizer diffusion in a solid rocket motor's bondline

Abstract: This work aims to determine the diffusion coefficients of the plasticizers dibutyl phthalate (DBP), dioctyl phthalate (DOP) and dioctyl azelate (DOZ) in the internal insulating layer of solid rocket motors. These plasticizers are originally present in the liner, rubber and propellant layers, respectively. These species are not chemically bonded and tend to diffuse from the propellant to the insulator and vice versa. A computer program based on the mathematical model of Fick's second law of diffusion was developed to perform the calculations from the concentration data obtained by gas chromatographic (GC) analyses. The samples were prepared with two different adhesive liners: one conventional (LHNA) and the other with barrier properties (LHNT). A common feature of both liners is that they were synthesized by the reaction of hydroxyl-terminated polybutadiene (HTPB) and diisocyanates. However, a bond promoter was used to increase the crosslink density of the LHNT liner and to improve its performance as a barrier against diffusion. The effects of the diffusion of the plasticizers were also investigated by hardness analyses, which were performed on samples aged at room temperature and at 80°C. The results showed an increasing trend for the samples aged at room temperature and an opposite behavior for the tests carried out at 80°C.

Keywords: Fick's Law, Diffusion, Bondline, Solid rocket propellant, Thermal insulation, Liner, Plasticizer, Hardness, Gas chromatograph.

NOMENCLATURE

Al  Aluminum powder
AP  Ammonium perchlorate
ASTM  American Society for Testing and Materials
C  Mass concentration
Meq  Mass concentration at equilibrium
Ml  Final mass concentration
C0  Initial mass concentration
Ct  Mass concentration at a time t
CG  Gas chromatograph
D  Diffusion coefficient
DBP  Dibutyl phthalate
DOP  Dioctyl phthalate
DOZ  Dioctyl azelate
HTPB  Hydroxyl-terminated polybutadiene
IAE  Institute of Aeronautics and Space
IPDI  Isophorone diisocyanate
l  Thickness
LHNA  Conventional adhesive liner
LHNT  Adhesive liner with barrier properties
MAPO  Trimethylaziridinylphosphine oxide
MS  Mass spectrometry
NB 7113  Thermal insulation based on nitrile rubber
R1  Propellant layer at 3 mm from the interface
R2  Propellant layer at 25 mm from the interface
R3  Propellant layer at 55 mm from the interface
t  Time
TDI  2,4-toluene diisocyanate
x  Coordinate normal to the cross section
z  Plane region of a sample

INTRODUCTION

The solid rocket motor is comprised of a combustion chamber filled with a solid composite propellant. To protect the interior of the chamber against the high temperatures generated during combustion, an insulating rubber is bonded to the internal wall of the vessel. The propellant is cast into the motor and bonded to the rubber by a thin layer of adhesive liner, thus forming a “sandwich” system containing the layers of propellant, liner and rubber (Marsh, 1970; Sutton and Biblarz, 1986; Rezende, 2001).

The term bondline refers here to the propellant/liner/insulator interfaces. The thin layer of liner prevents the separation of the bonded system and can also act as a barrier to control the diffusion of mobile species in solid rocket motors (Byrd and Guy, 1985; Gercel et al., 2000).

A typical solid composite propellant contains approximately 15 weight percent of a hydroxyl-terminated polybutadiene (HTPB) polymeric resin, 80 weight percent of solids (ammonium perchlorate, AP, and aluminum powder, Al) and five weight percent of additives such as cure agents, burn catalysts, stabilizers,


plasticizers, etc. The amount of each ingredient can vary according to the desired application. The concentration of the plasticizers can represent up to 60% of the total additives (Paniker and Ninan, 1997; Bandgar et al., 2001; Folly and Mäder, 2004; Lourenço et al., 2006).

Plasticizers are used to act as lubricants, to increase the flexibility of the polymeric chains, to improve the rheological properties during processing and to reduce the viscosity of the system. However, the plasticizer is not chemically bonded and can diffuse between the interfaces of the bondline formed in the “sandwich” system due to concentration differences. In general, other free species that are not bonded to the matrix, such as burn catalysts and cure agents, can also diffuse (Pröbster and Schmucker, 1986; Paniker and Ninan, 1997; Belhaneche-Bensemra et al., 2002; Gottlieb and Bar, 2003; Marcilla et al., 2004; Grythe and Hansen, 2007).

During the storage period the propellant suffers a natural process of deterioration defined as aging. The main mechanisms that govern the aging process are the diffusion and oxidation of the polymeric matrix which can occur at room temperature or can be accelerated by the increase of the temperature during storage (Celina et al., 2000; Hocaoğlu et al., 2001; Judge, 2003; Dilsiz and Ünver, 2006).

The diffusion process of the plasticizers can cause degradation of the adhesion in the interfacial layers, change the mechanical properties of the propellant and can affect the performance of the rocket motor (Byrd and Guy, 1985; Pröbster and Schmuker, 1986, Gottlieb and Bar, 2003).

In this work, two different types of adhesives are used to bond the propellant to the rubber. The compositions of both liners are based on the HTPB binder. The liner identified as LHNA contains the plasticizer dibutyl phthalate (DBP) in its chemical formulation and is cured with 2,4-toluene diisocyanate (TDI). The liner identified as LHNT has a higher crosslink density than LHNA due to the addition of a bond promoter, is cured with isophorone diisocyanate (IPDI) and does not have any plasticizer in its composition.

The purpose of this study is to calculate the diffusion coefficients of the plasticizers in the insulation layer of samples prepared with LHNT and LHNA liners using the mathematical model of Fick. We also report the results of the hardness tests of samples submitted to natural and accelerated aging.

EXPERIMENTAL

The LHNT adhesive was developed in the Chemistry Division of the Institute of Aeronautics and Space (IAE). The bond promoter trimethylaziridinylphosphine oxide

(MAPO) was used to increase the crosslink density of this liner to prevent the diffusion of the mobile species between the insulation and propellant and vice versa (Gercel et al., 2000).

The plasticizers dioctyl azelate (DOZ), dioctyl phthalate (DOP) and dibutyl phthalate (DBP) are present, respectively, in the composition of the propellant, rubber and liner. The determination of the diffusion coefficients of these species in the insulation layer of samples prepared with the LHNT and LHNA liners was performed with a computer program based on Fick's second law. The software, developed for this work, used the concentration data from chromatographic analyses obtained up to 31 days after the curing period from samples aged at 80°C.

This interval was established based on previous observations carried out with samples aged at 50ºC, at which time the results obtained showed that the diffusion process reached the equilibrium at approximately 50 days after the curing period. Moreover, the softening of the propellant near to the interface was also verified. Then, based on previous observations the hardness tests were performed on different regions of the propellant to confirm the occurrence of the softening.

The diffusion phenomenon on propellant/liner/insulation rubber layers occurs due to the concentration differences between these regions. The diffusion system can be described by Fick’s second law of diffusion (Crank, 1957), represented by the following equation

∂C/∂t = D ∇²C   (1)

Considering the diffusion in one direction z of a plane sheet:

∂C/∂t = D ∂²C/∂z²   (2)

Consider the region –l < z < l of a plane sample of thickness 2l, assuming a constant concentration C0 at t = 0 and a concentration C1 at the surface. Thus, the following conditions are observed:

(a) Initial condition:

for t = 0 and –l < z < l → C(z, 0) = C0


Figure 1: Dimensions of the block of the propellant sample containing the propellant and insulation (rubber and liner) layers.

Figure 2: Scheme of the partition of the block of propellant sample.

(b) Boundary conditions:

for t > 0 and z = 0 → ∂C/∂z = 0

and for t > 0 and z = l → C(l, t) = C1.

Applying the conditions above, using the method of separation of variables and denoting the mass concentrations as M, the following equation is obtained:

Mt/Meq = 1 – (8/π²) Σ (n = 0 to ∞) [1/(2n + 1)²] exp[–D(2n + 1)²π²t / (4l²)]   (3)

where Mt is the mass concentration in the test layer at a time t, Meq is the mass concentration at equilibrium and D is the diffusion coefficient.

Equation (3), combined with the least squares, Newton-Raphson and Gaussian elimination methods, was used to calculate the diffusion coefficient through a computer program written in Fortran and developed for this work (Libardi, 2009).
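For illustration only, the sketch below fits D to Eq. (3) by nonlinear least squares in Python, with scipy's curve_fit standing in for the Newton-Raphson/Gaussian-elimination solver of the authors' Fortran program; the half-thickness l and the uptake data are assumed values, not the measured ones.

```python
# Minimal sketch (not the authors' program): fit D in Eq. (3) to Mt/Meq data.

import numpy as np
from scipy.optimize import curve_fit

L_HALF = 0.5  # cm, assumed half-thickness l of the plane sample

def fick_ratio(t, D, l=L_HALF, n_terms=50):
    """Mt/Meq for a plane sheet of thickness 2l (series solution of Eq. 3)."""
    n = np.arange(n_terms)[:, None]
    series = (8.0 / np.pi**2) / (2 * n + 1) ** 2 * np.exp(
        -D * (2 * n + 1) ** 2 * np.pi**2 * np.asarray(t) / (4.0 * l**2))
    return 1.0 - series.sum(axis=0)

# Hypothetical aging times (converted to seconds) and Mt/Meq values
t_data = np.array([1, 3, 7, 12, 20, 31]) * 86400.0
m_ratio = np.array([0.28, 0.47, 0.68, 0.82, 0.93, 0.98])

D_fit, _ = curve_fit(fick_ratio, t_data, m_ratio, p0=[1e-7])
print(f"Fitted diffusion coefficient D = {D_fit[0]:.2e} cm^2/s")
```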

Sample preparation

Metallic boxes with internal dimensions of 130 x 130 x 65 mm (length x height x thickness) were used to prepare the samples. Firstly, the insulation rubber (NB7113) was placed into the box and, in sequence, an adhesive liner (LHNA) was applied over its surface. In the next stage, the box was filled with the propellant, forming the interfaces of interest to this work, and was submitted to the curing process at 50°C for seven days. The same procedure was executed for samples prepared with the LHNT liner. The sample block formed is shown in Fig. 1. The insulating layer is formed by both layers of rubber and liner. The HTPB-based propellant is cured with isophorone diisocyanate (IPDI) and its composition contains 84 weight percent of solids (aluminum and ammonium perchlorate) immersed in 15 weight percent of HTPB and 3.1 ± 0.04 weight percent of the plasticizer DOZ. The rubber contains 6.9 ± 0.13 weight percent of DOP and the liner 1.3 ± 0.03 weight percent of DBP.

Plasticizer extraction

Immediately after the end of the curing process, the block of the sample containing the layers of propellant/liner/rubber was removed from the metallic box. The sample was sliced into six pieces of 10 mm thickness each, as Figure 2 shows, and aged at 80°C for 31 days. On days 1, 3, 7, 12, 20 and 31, one slice was removed from the oven and cooled down to room temperature, which in this work refers to the temperature range between 24 and 27°C. Then, the insulating layer (rubber and liner) was separated from the propellant. In sequence, this layer was fragmented into small square pieces of approximately 5 x 5 mm. From these portions, 1 g of material was separated and transferred to filter paper. In the next step, this paper was placed into the Soxhlet extractor, which was filled with 150 mL of ethyl acetate. The extraction was carried out at 75°C for 16 hours. The whole procedure was performed in triplicate, and ten extractions were executed for each replicate.

After the extraction the chromatographic analysis was conducted to determine the plasticizer mass concentration.

Gas Chromatograph

The chromatographic analyses were performed using a Varian gas chromatograph (CG) with a flame ionization


detector and a Finnigan mass spectrometer (MS). The column utilized was a DB-5 (5% phenyl methyl silicone) with a diameter of 0.25 mm, a film thickness of 0.25 µm and a length of 30 m. To execute the analyses, a 1 mL/min flow rate of nitrogen was used and 1 µL of the sample was injected.

Shore A Hardness

The hardness tests were executed simultaneously on two identical groups of samples. One group was aged at 80°C in a commercial air-circulating oven with controlled temperature (± 1°C) under ambient atmospheric conditions (~712 mm Hg). The other group was aged at room temperature (24 – 27°C). The samples were not submitted to moisture control. On days 20, 27, 40 and 54 after the curing period, both groups were submitted to the Shore A analyses.

The indentations were performed in three different regions of the propellant as shown in Figure 3. These regions were designated as R1, R2 and R3, and are located at 3 mm, 25 mm and 55 mm, respectively, measured from the composite interface with the liner.

and DBP exhibited the opposite behavior (Fig. 5), since they diffused from the insulation to the propellant. The values found for all plasticizers in the first period analyzed (end of the cure) indicate that the diffusion of these species occurred during the cure of the propellant. The mass concentrations of the plasticizers determined in the samples prepared with the LHNT liner are smaller than those determined in the samples prepared with the LHNA liner, as can be seen in Figures 4 and 5. These results suggest that the LHNT liner acts as a barrier against the diffusion of the plasticizers.

The diffusion coefficients of the plasticizers DOZ, DOP and DBP calculated by Fick’s mathematical model are exhibited in Tab.1. The values were determined from the concentration data obtained from the gas chromatographic analyses executed in the insulating layer of the samples aged up to 31 days after the curing period at 80ºC and prepared with both liners LHNA and LNHT.

Figure 3: Image of the propellant (gray layer) and the insulation (black layer). The dotted line indicates the regions R1, R2 and R3 submitted to the hardness tests.

A durometer with a digital Shore A scale was used according to ASTM D 2240-05 (1995). Five indentations were executed in order to obtain consistent results.

RESULTS AND DISCUSSIONS

Figures 4 and 5 show the mass concentration data versus time of the plasticizers DOZ, DOP and DBP obtained by chromatographic analyses in the insulating layer at 80°C.

Figure 4 shows that the DOZ plasticizer diffused from the propellant into the insulation layer. Otherwise, the DOP

Figure 4: DOZ mass concentration vs time at 80°C on the insulating layer for the samples prepared with the LHNA and LHNT.

Figure 5: DOP and DBP mass concentration vs time at 80°C on the insulating layer for the sample prepared with the LHNA and LHNT.

Table 1: Diffusion coefficients of DOP, DOZ and DBP determined on the insulating layer at 80°C.

Diffusion coefficient D × 10⁷ (cm²/s)
Liner | DOP | DOZ | DBP
LHNA | 1.54 | 2.01 | 0.456
LHNT | 0.603 | 0.703 | -
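To give these coefficients a physical feel, the sketch below converts each value into a rough characteristic diffusion time t ≈ l²/D across an insulating layer of assumed half-thickness l; the thickness value is illustrative and not taken from the paper.

```python
# Minimal sketch: characteristic diffusion time t ~ l^2 / D for the Table 1
# coefficients. The half-thickness l below is an assumed, illustrative value.

D_VALUES = {  # cm^2/s (Table 1 lists D x 10^7)
    ("LHNA", "DOP"): 1.54e-7, ("LHNA", "DOZ"): 2.01e-7, ("LHNA", "DBP"): 0.456e-7,
    ("LHNT", "DOP"): 0.603e-7, ("LHNT", "DOZ"): 0.703e-7,
}
l = 0.3  # cm, assumed half-thickness of the insulating layer

for (liner, plasticizer), D in D_VALUES.items():
    t_days = l**2 / D / 86400.0  # seconds -> days
    print(f"{liner}/{plasticizer}: t ~ {t_days:.0f} days")
```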


As observed in Table 1, the diffusion coefficient of the plasticizer DOZ on the insulating layer obtained from the samples prepared with the LHNT liner is lower than the coefficient obtained from the samples prepared with the LHNA liner in the same region. Firstly, this result shows that the DOZ diffused from the propellant into the rubber due to the concentration differences. The lower coefficient can be explained by the higher crosslink density of the LHNT liner, which reduced the free volume between its molecules, thus diminishing the displacement of the plasticizer across the interface and, consequently, its diffusion coefficient.

It can also be observed in Table 1 that the diffusion coefficient of DOP obtained with the samples prepared with the LHNT liner is lower than the coefficient obtained with the samples prepared with the LHNA liner. In this case, DOP is originally present in the composition of the rubber, and the barrier effect of the LHNT liner, due to its higher crosslink density, prevented its diffusion into the propellant layer more effectively than the LHNA liner, as confirmed by the coefficients found. The plasticizer DBP is only present in the composition of the LHNA liner, and its diffusion coefficient on the insulating layer is lower than those of DOZ and DOP.

The experimental and simulated curves of diffusion of the DOZ, DOP and DBP are exhibited in Fig. 6 - 10. From these figures it is possible to verify good agreement between the theoretical and experimental curves, which validates the mathematical model of Fick applied to this study (Gottlieb and Bar, 2003).

The curves in Figs. 11 and 12 were built from the results of the indentations executed in regions R1, R2 and R3 of the propellant layer, located, respectively, at 3 mm, 25 mm and 55 mm from the interfacial layer.

The results of the hardness analyses of the samples aged at room temperature are exhibited in Fig. 11, where it is possible to verify an increasing trend of the hardness with the aging period for the three regions analyzed. The loss of plasticizer to the insulation layer due to the diffusion process predominantly causes the hardening of the propellant and influences the layer adhesion (Hocaoğlu et al., 2001). The hardening of HTPB-based propellants during aging was attributed by Celina et al. (2000) to oxidative crosslinking of the binder

Figure 6: Experimental and simulated diffusion curves vs time for DOZ on the insulating layer at 80°C (LHNT).

Figure 7: Experimental and simulated diffusion curves vs time for DOZ on the insulating layer at 80°C (LHNA).

Figure 9: Experimental and simulated diffusion curves vs time for DOP on the insulating layer at 80°C (LHNA).

Figure 8: Experimental and simulated diffusion curves vs time for DOP on the insulating layer at 80°C (LHNT).

Figure 10: Experimental and simulated diffusion curves vs time for DBP on the insulating layer at 80°C (LHNA).


due to considerable unsaturation in the polymer structure and easy access of atmospheric oxygen.

It is also possible to observe in Fig. 11 that the values of the hardness determined in the regions R2 and R3 are both similar and higher than the values found in the region R1. The lower values in this region indicate the softening in the first 3 mm of the propellant. According to Byrd and Guy (1985), the diffusion of various substances can interfere with the propellant cure, producing a soft layer, hence resulting in a weak bond. The cure agent itself may diffuse out of the propellant before the crosslinking is complete. This phenomenon usually occurs within the first 5 mm of the propellant.

According to Kishore et al. (1984), moisture can reduce the tensile strength and the hardness of the propellant. According to Iqbal and Liang (2006), water molecules do not react with the ingredients of the HTPB-based propellant; however, at higher temperatures, the interaction between the polymer and the solid particles can degrade its mechanical properties.

The plasticizer diffusion in the bondline was observed in this work, which might explain the changes in the hardness at room temperature but it seems that at a higher temperature the effect of the moisture is more significant. In order to better understand this behavior, more specific studies are necessary.

CONCLUSION

The mathematical model of Fick applied in this work calculated the diffusion coefficients of the DOZ, DOP and DBP plasticizers with success. According to the concentration data it is possible to conclude that the diffusion process begins at the early stages of curing. The agreement between the simulated and experimental values validates this model.

The barrier effect of the LHNT liner, due to its higher crosslink density, was confirmed by the results of the diffusion coefficients of the plasticizers on the insulating layer.

The results of the hardness tests carried out with samples aged at room temperature and at 80°C showed the softening of the propellant in the layer located at 3 mm from the bondline. During aging, an increasing trend of the hardness was observed for the samples aged at room temperature, whereas the opposite behavior was verified for the samples aged at 80°C. These changes can cause damage mainly to the bondline, thereby affecting the performance and safety of the rocket motor.

ACKNOWLEDGMENTS

The authors acknowledge the Brazilian Agency CNPq for financial support and the Division of Chemistry of the Institute of Aeronautics and Space (IAE).

REFERENCES

American Society for Testing and Materials, 1995, ASTM D 2240-05: “Standard Test Method for Rubber Property: Durometer Hardness”, pp. 1-13.

Byrd, J. D., Guy, C. A., 1985, “Destructive Effects of Diffusing Species in Propellant Bond Systems”, Proceedings of AIAA/SAE/ASME/ASEE- 21st Joint Propulsion Conference, Monterey, July, pp.1438.

Figure 11: Hardness Shore A versus time for different regions of the propellant determined at room temperature.

The results of the hardness analyses of the samples aged at 80ºC are exhibited in Fig. 12. In this condition, lower values of the hardness in the region nearest to the interface than the values determined in the regions R2 and R3 were also found.

Figure 12: Hardness Shore A versus time for different regions of the propellant at 80°C.

The decreasing trend of the hardness in all extension of the sample at higher temperatures (Fig. 12) was not expected since the polymeric matrix was already formed in the period analyzed and its natural trend is to suffer hardening due to the loss of plasticizer and due to the oxidation process (Celina et al.,2000; Dilsiz and Ünver, 2006).


Bandgar, B. M., Krishnamurthy, V. N., Mukudan, T., Sharma, K. C., 2001, “ Mathematical Modeling of Rheological Properties of Hydroxyl-Terminated Polybutadiene Binder and Dioctyl Adipate Plasticizer”, Journal of Applied Polymer Science, Vol.85, pp. 1002-1007.

Belhaneche-Bensemra, N., Seddam, C., Ouahmed, S., 2002, “Study of the Migration of Additives from Plasticized PVC”, Macromol. Symp., Vol. 180, pp. 191-201.

Crank, J., 1957, “The Mathematics of Diffusion”, Clarendon Press, Oxford, 85 p.

Celina, M., Graham, A. C., Gillen, K, T., Assink, R, A., Minier, L. M., 2000, “Thermal Degradation Studies of a Polyurethane Propellant Binder”, Rubber Chemistry and Technology, Vol.73, pp. 779-797.

Dilsiz, N., Ünver, A., 2006, “Characterization Studies on Aging Properties of Acetyl Ferrocene Containing HTPB-Based Elastomers”, Journal of Polymer Science, Vol. 101, pp. 2538-2545.

Folly, P., Mäder, P., 2004, “Propellant Chemistry”, Chimia, Vol. 58, pp.374-382.

Gercel, B. O., Üner, D.O., Pekel, F., Özkar, S., 2001, “Improved Adhesive Properties and Bonding Performance of HTPB-Based Polyurethane Elastomer by Using Aziridine Type Bond Promoter” J. of Applied Polymer Science, Vol. 80, pp. 806-814.

Grythe, K. F., Hansen, F. K., 2007, “Diffusion Rates and the Role of Diffusion in Solid Rocket Motor Adhesion”, Journal of Applied Polymer Science, Vol. 103, pp. 1529-1538.

Gottlieb, L., Bar, S., 2003, “Analyzes of DOA Migration in HTPB/AP Composite Propellants”, Propellants, Explosives, Pyrotechnics, Vol.28, pp.12-17.

Haska, S. B., Bayramli, E., Pekel, F., Özkar, S., 1998, “Mechanical Properties of HTPB-IPDI-Based Elastomers”, J. Appl. Polym. Sci., Vol. 64, pp.2347-2354.

Kishore, K., Pai Verneker, V. R., Varghese, G., 1984, “DTA Studies on the Thermal Oxidation and Crosslinking Reactions of Carboxyl-Terminated Polybutadiene”, Polymer Science, Vol. 22, pp. 1481-1486.

Hocaoğlu, Ö., Özbelge, T., Pekel, F., Özkar, S., 2001, “Aging of HTPB/AP-Based Composite Solid Propellants, Depending on the NCO/OH and Triol/Diol Ratios, Vol. 79, pp. 959-964.

Iqbal, M. M., Liang, W., 2006, “Modeling the Moisture Effects of Solid Ingredients on Composite Propellant Properties”, Aerospace Science Technology, Vol.10, pp. 695-699.

Judge, M. D., 2004, “The Application of Near-Infrared Spectroscopy for the Quality Control Analysis of Rocket Propellant Fuel Pre-Mixes”, Talanta, Vol. 62, pp. 675-679.

Libardi, J., 2009, “Estudo do Fenômeno de Difusão de Plastificantes em Propelente Compósito Sólido à base de Polibutadieno Hidroxilado Utilizado em Motores Foguete” Tese Doutorado, Universidade Estadual de Campinas, S.P., Brazil, 45p.

Lourenço, V. L., et al., 2006, “Determinação da Distribuição de Funcionalidade de HTPB e Verificação de sua Influência no Comportamento Mecânico de Poliuretano Utilizado em Motor-Foguete”, Polímeros, Vol.16, No.1, pp. 66-70.

Marsh JR, H. E.,1970, “ Polymers in Space Research”, Ed. Marcel Dekker, New York, 365 p.

Marcilla, A., García, A., García-Quesada, J. C., 2004, “Study of the migration of PVC plasticizers”, J. Anal. Appl. Pyrolysis, Vol. 71, pp. 457-463.

Paniker, S. S., Ninan, K. N., 1997, “Influence of Molecular Weight on the Thermal Decomposition of Hydroxyl-Terminated Polybutadiene”, Thermochimica Acta, Vol.290, pp.191-197.

Pröbster, M., Schmucker, R. M., 1986, “Ballistic Anomalies in Solid Rocket Motors Due to Migration Effects”, Acta Astronautica, Vol. 13, pp.599-605.

Rezende, L. C., 2001, “Envelhecimento de Propelente Compósito à Base de Polibutadieno Hidroxilado”, Tese Doutorado, Universidade Estadual de Campinas, S.P., Brazil, pp. 3-8.

Sutton,G. P., Biblarz, O., 1986, “Rocket Propulsion Elements”, Ed. John Wiley, New York, 231 p.


Luiz Cláudio Pardini*
Institute of Aeronautics and Space
São José dos Campos, Brazil
[email protected]

Adriano Gonçalves
Institute of Aeronautics and Space
São José dos Campos, Brazil
[email protected]

* author for correspondence

Processamento de compósitos termoestruturais de carbono reforçado com fibras de carbono

Resumo: O presente trabalho descreve os processos de obtenção de compósitos termoestruturais de carbono reforçado com fibras de carbono. O processamento desses materiais tem início pela definição de uma arquitetura do reforço de fibras de carbono, seja na forma de empilhamento simples do reforço, de tecidos ou na forma de reforço multidirecional. A incorporação de matriz carbonosa no reforço de fibras, pelo preenchimento de vazios e interstícios, promove a densificação do material, e o incremento de massa específica. Duas rotas de processamento são predominantes na obtenção desses materiais, o processo via impregnação líquida e o processo via impregnação em fase gasosa. Em ambos os casos, processos térmicos levam à formação de matriz de carbono com propriedades específicas, que derivam de seus materiais precursores. Os processos diferem entre si, também, pelo rendimento, enquanto os processos executados por impregnação líquida apresentam rendimento de, aproximadamente, 45%, os processos por impregnação em fase gasosa apresentam rendimento em torno 15%.

Palavras-chave: Compósitos carbono/carbono, Processamento, Fibras de carbono, Pirólise, Gargantas de tubeira de foguete.

Processing of thermo-structural carbon-fiber reinforced carbon composites

Abstract: The present work describes the processes used to obtain thermostructural carbon/carbon composites. The processing of these materials begins with the definition of the architecture of the carbon fiber reinforcement, in the form of stacked plies, fabrics or multidirectional reinforcement. Incorporating the carbonaceous matrix into the fiber reinforcement, by filling the voids and interstices, leads to the densification of the material and an increase in density. There are two principal processing routes for obtaining these materials: liquid phase impregnation and gas phase impregnation. In both cases, thermal processes lead to the formation of a carbon matrix with specific properties related to its precursor. These processes also differ in terms of yield: with liquid phase impregnation the yield is around 45 per cent, while gas phase processing yields around 15 per cent.

Keywords: Carbon-carbon composites, Processing, Carbon fibers, Pyrolysis, Rocket nozzle throat.

LIST OF SYMBOLS

σT  Tensile strength
ρ  Density
%Vi  Volume percentage of component i
%Mi  Mass percentage of component i
A  Surface area
V  Free volume for deposition
P0  Initial porosity of the preform
r0  Carbon fiber radius
Pa  Pascal = 1 N/m²
GPa  Gigapascal = 1 Pa × 10⁹
MPa  Megapascal = 1 Pa × 10⁶


INTRODUCTION

The advent of advanced composite technology in the 1940s brought countless benefits to several industrial segments, ranging from the medical to the aerospace field. The simple compaction of reinforcing fibers, whether natural or synthetic, bound together with a binder material in the form of a thermosetting resin formulated with hardeners, produced lightweight materials that were structurally adequate for a variety of applications. Several manufacturing processes were then implemented, adapted and incorporated into composite technology, such as resin infusion processes (Prado, 2009).

The predominant geometry of composites is that of thin structures, and manufacturing processes therefore generally aimed at compacting layers of continuous or short reinforcing fibers. The hand lay-up process, in which a brush is used to incorporate the resin into the fibers, is still widely used today, mainly because of the low investment and the low inherent process cost (Mason, 2008). The simplicity of this process led to a rapid demand for these materials, but the components obtained show modest mechanical performance, which most often restricts the use of materials produced in this way to aesthetic purposes. Moreover, in most cases the resin is cured with cold-curing hardeners (without external application of temperature), which also limits the mechanical properties, even in applications at room temperature.

Assim, novas tecnologias vêm sendo incorporadas ao rol de processos de moldagem de compósitos, como por exemplo, o processo de moldagem a vácuo. Nesse caso, as fibras de reforço e a matriz são dispostas sobre a superfície de um molde e, sobre esse conjunto, um filme polimérico desmoldante é posicionado, onde a compactação de camadas é realizada pela ação de uma bomba de vácuo, que possibilita que a pressão atmosférica atue como meio compactante (Prado, 2009).

O processo de moldagem a vácuo resulta em um material com desempenho mecânico melhor, quando comparado aos materiais obtidos pelo processo de moldagem manual. Os compósitos moldados a vácuo apresentam frações em volume de fibras de reforço maiores (40-50% em volume), que os obtidos pelo processo de moldagem manual (20-40%) e possibilitam, também, a eliminação de defeitos, na forma de bolhas de ar presentes, ocasionalmente, em regiões internas do material.

Os processos de prensagem uniaxial, oriundos da conformação de metais, também foram incorporados ao rol de técnicas de moldagem utilizadas na fabricação de compósitos. Esse procedimento de moldagem possibilita o uso de resinas tanto de cura a frio quanto de cura a quente, e os compósitos resultantes apresentam maior fração em volume de fibras, se comparados aos processos de moldagem manual e a vácuo. Uma maior fração em volume de fibras confere propriedades mecânicas melhores ao compósito. O tamanho da peça é, entretanto, limitado ao tamanho da mesa da prensa de moldagem. Embora o processo de prensagem resulte em maior fração em volume de fibras no compósito, podem ocorrer regiões com a presença de vazios, na forma de poros (Costa, 2001). A redução do número de vazios é possibilitada pelo uso de vácuo, o que torna complexo o projeto do molde.

Os processos descritos anteriormente têm limitações tanto de ordem geométrica quanto de qualidade. A inserção dos compósitos no setor aeronáutico data da década de 1940 e, na ocasião, sua utilização em estruturas de responsabilidade estrutural (estruturas primárias) era penalizada pela baixa resistência ao cisalhamento interlaminar (< 50 MPa para compósitos epóxi/fibras de carbono) (Almeida, 1994; Costa, 2001; Mason, 2008). O advento do uso de autoclaves, conforme mostra a Fig. (1A), onde se utilizam, simultaneamente, pressão (até 1 MPa), atmosfera inerte (N2) e vácuo durante a moldagem, fez crescer a resistência ao cisalhamento interlaminar de compósitos bidirecionais para valores próximos a 70-80 MPa e atende ao processamento de geometrias complexas e de grandes tamanhos demandados pela indústria aeronáutica (Ancelotti, 2006).

O uso de hidroclaves, conforme mostra a Fig. (1B), onde a água é o vetor de pressão, possibilita obter compósitos com resistência ao cisalhamento acima de 100 MPa, resultante de uma otimização da fração em volume do reforço. Portanto, um conjunto de procedimentos e técnicas de moldagem permite que sejam obtidos compósitos que atendam aos requisitos aeronáuticos, balizados pela resistência mecânica (Almeida, 1994).

A corrida espacial empreendida durante o período da Guerra Fria veio demandar materiais para aplicações extremas, em que as propriedades mecânicas deviam atender aos requisitos de uso em temperaturas elevadas (T > 1000 °C). As ligas metálicas, na forma de aços especiais, atendiam parcialmente esse requisito: a despeito da boa resistência mecânica (σT > 500 MPa) e do módulo elástico (E > 100 GPa), apresentam alta massa específica (ρ = 7,8 g/cm3 para aços) e, quando em serviço por longa duração, estão sujeitas a esforços de fluência (Buckley, 1993). O alívio de massa em sistemas e estruturas de veículos espaciais, sem penalizar propriedades mecânicas, é crucial. Foi assim que as pesquisas foram gradativamente sendo direcionadas para a obtenção de materiais mais leves e que apresentassem resistência termomecânica condizente com as aplicações que demandassem esses requisitos.


As técnicas de moldagem foram inicialmente adaptadas daquelas utilizadas para os compósitos poliméricos. Nessa classe de materiais podem ser incluídos os compósitos de matriz de carbono e os compósitos de matriz cerâmica (Pardini, 2009). Para esses materiais, novos métodos de processamento foram sendo implementados. O presente trabalho tem o propósito de abordar a tecnologia de processamento de compósitos de carbono reforçado com fibras de carbono (CRFC) utilizados em aplicações aeroespaciais.

Processamento de compósitos termoestruturais

A literatura disponível sobre o processamento de compósitos termoestruturais, incluindo os compósitos CRFC e correlatos, era escassa nas décadas de 1960 e 1970, limitando-se basicamente ao estudo das matérias-primas envolvidas na fabricação dos materiais. Isso se deveu ao fato de que as aplicações desses materiais tinham, à época, conotação bélica e sensível (Schmidt, 1972). Durante a década de 1980 e 1990, com a exploração espacial se tornando um produto comercial, devido ao avanço no mercado de lançadores de satélite, a disponibilidade de informações na área teve um aumento crescente, mesmo limitando-se a comparações entre processos já estabelecidos para esses materiais.

De forma geral, o limite de operação de um componente é ditado pela temperatura em que o mesmo foi processado. Os polímeros, por exemplo, são processados durante a síntese, e também no processo de moldagem, em temperaturas que podem atingir até 450oC, enquanto a temperatura de obtenção de materiais cerâmicos pode atingir níveis da ordem de 1800oC. Assim, em fases críticas envolvidas no processamento desses materiais, para a obtenção de materiais de uso em altas temperaturas, utiliza-se, via de regra, fornos de alta temperatura (Otani, 1996; Gonçalves, 2008).

As aplicações de compósitos termoestruturais baseados em compósito carbono/carbono são destinadas a escudo térmico, material ablativo, elementos de fricção, componentes de vetoração, gerenciamento de energia, entre outras. Dentre esses usos, destaca-se a utilização em gargantas de tubeira de foguete a propelente sólido e câmaras de combustão de propelentes líquidos. As características que essas aplicações demandam são isolamento, baixa massa específica, desgaste controlado, dissipação térmica controlada, baixo coeficiente de expansão, coeficiente de atrito, e permitem que o material se comporte como reservatório de calor (Lamicq, 1981; Savage, 1993).

Os compósitos CRFC são formados pela utilização de fibras de carbono e matrizes carbonosas, estas formadas essencialmente pelo elemento carbono. O diagrama esquemático da tecnologia envolvida na obtenção desses materiais é mostrado na Fig. (2).

Figura 1: Autoclave (A) e Hidroclave (B) utilizados em processos de moldagem de compósitos poliméricos.

Figura 2: Diagrama esquemático simplificado das etapas de processamento de compósitos de carbono reforçado com fibras de carbono (reforços: fibras de carbono em arquitetura 2D, 3D ou nD; matrizes precursoras: resina, piche ou hidrocarboneto gasoso; processamento: impregnação ou infiltração química em fase gasosa, carbonização, grafitização e usinagem; produto: compósito de carbono reforçado com fibras de carbono).


O reforço de fibras de carbono, similarmente aos compósitos poliméricos, suporta os carregamentos mecânicos, direciona a condutividade térmica e mantém a integridade estrutural de estruturas, devido ao seu baixo coeficiente de expansão térmica (1 × 10⁻⁶ °C⁻¹). As matrizes carbonosas podem ser oriundas de resinas termorrígidas (principalmente resinas fenólicas), piche de petróleo, piche de alcatrão de hulha ou pela decomposição de gases orgânicos (metano e propano, por exemplo). As resinas termorrígidas são convertidas em carbono por processos de pirólise em fase sólida (Bento, 2004).

Nesse caso, o carbono residual resultante do processo de pirólise não é influenciado pelas condições de processamento, exceto pela taxa de aquecimento, e o material tem características vítreas, o que é um fator inconveniente por diversas razões, sendo a principal delas o impedimento de sua grafitização. A escolha do precursor carbonoso vai determinar o tipo de processo a ser utilizado na manufatura do material. Por outro lado, a pirólise em fase líquida é conduzida pelo uso de piches e produz materiais carbonosos denominados de “coques moles”. Esses materiais são resultantes do escoamento e alinhamento simultâneo de macromoléculas, que por sua vez se arranjam e são ordenadas, gerando planos basais grafíticos empilhados e bem orientados (Griffiths, 1981; Rand, 1993).

Os materiais de carbono obtidos a partir de resinas termorrígidas, à base de resinas fenólicas, têm massa específica de, aproximadamente, 1,50 g/cm3, enquanto materiais de carbono obtidos a partir de piches apresentam massa específica maior que 1,9 g/cm3. Tanto o processamento de compósitos CRFC com matriz à base de resinas termorrígidas quanto o que se utiliza de piches são processos de impregnação em fase líquida, onde o substrato poroso é formado de fibras de carbono (Fitzer, 1987; Gonçalves, 2008). O outro processo, denominado infiltração em fase gasosa (CVI), refere-se à impregnação por meio de gás que contém carbono em sua molécula, elemento que se decompõe no substrato poroso de fibras de carbono. Esse processo, além de complexo em seu controle, demanda um longo tempo, porque o preenchimento completo dos vazios ao redor das fibras requer temperaturas de processo baixas, na faixa de 900-1000 °C, para evitar que a etapa preponderante de deposição seja controlada por difusão (Becker, 1998; Delhaès, 2002).

Reforços e Preformas

Os compósitos estruturais modernos foram concebidos, inicialmente, a partir do uso de matrizes associadas a camadas empilhadas de reforço, geralmente tecidos e fitas unidirecionais. Assim, as propriedades no plano de reforço, decorrentes do empilhamento puro e simples de camadas (lâminas de reforço), restringem as aplicações desse tipo de compósito a componentes delgados, devido à limitada resistência do material fora do plano de reforço (Levy, 2006). A obtenção de compósitos com geometrias maciças só é possível com o uso de preformas, advindas de uma arquitetura de fibras multidirecionais. Essa solução tecnológica propiciou uma distribuição mais uniforme de propriedades termomecânicas ao material (Lachmann, 1978; Pardini, 2000). Além disso, a tenacidade à fratura e a resistência ao cisalhamento superam os valores obtidos para compósitos laminares. São inúmeras as variações possíveis de reforço multidirecional, desde as mais complexas, como pentadirecional (5D) e tetradirecional (4D), apresentadas na Fig. (3a) e na Fig. (3b), até as mais simples, como as estruturas tridirecionais ortogonais (3D), mostrada na Fig. (3c). A disposição de reforços em direções predeterminadas pode ser efetuada tanto a partir do uso de varetas unidirecionais pultrudadas, utilizando um gabarito, como pelo uso de fibras secas, usando equipamentos automatizados. O processo de pultrusão permite obter, de forma contínua, peças em compósito com geometria de seção transversal definida, sendo as formas circulares e sextavadas as mais utilizadas. As propriedades térmicas, mecânicas, ablativas e de resistência à erosão do produto (compósitos CRFC) vão definir qual o tipo de preforma adequado à aplicação que se vislumbra (Gonçalves, 2008).

Figura 3: Preformas multidirecionais utilizadas na manufatura de compósitos termoestruturais. (A) Tetradirecional (4D) Piramidal, (B) Tetradirecional 0/60 Planar, (C) Tridirecional (3D) ortogonal (Pardini, 2000).


Um parâmetro importante a se considerar inicialmente na concepção de uma preforma é o volume ocupado pelo reforço, seja esse na forma de varetas pultrudadas ou mesmo na forma de fibras secas. Como as fibras, ou


varetas são dispostas regularmente e se repetem no volume da estrutura, pode-se caracterizar a estrutura repetitiva por uma célula unitária.

O volume de varetas, ou fibras de reforço, em uma preforma, ou estrutura de fibras é calculado pela Eq. (1) (Levy, 2006).

%Vi = %Mi · (ρpref / ρi)    (1)

onde:
%Vi = porcentagem em volume do componente i, no compósito (ou preforma);
%Mi = porcentagem em massa do componente i, no compósito (ou preforma);
ρpref = massa específica aparente da preforma (ou do compósito);
ρi = massa específica aparente do componente i.

Se forem consideradas, por exemplo, as preformas da Fig. (3), constituídas de varetas circulares pultrudadas, com diâmetro de 2 mm e manufaturadas com fibras de carbono e resina fenólica, a fração em volume de varetas é de 50%. Se essas mesmas preformas forem pirolisadas, a fração em volume de fibras para as preformas 3D ortogonal e 4D 0/60 planar será de 30% (Pardini, 2000). Nas mesmas condições, para a preforma 4D piramidal, o volume de varetas pultrudadas é de 68% e, após a pirólise, o volume de fibras de carbono é de 45%. Tipicamente, varetas moldadas com resina fenólica e fibras de carbono têm massa específica de, aproximadamente, 1,55 g/cm3 e 55% em volume de fibras de carbono (Pardini, 2000).
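A título de ilustração, um esboço em Python da aplicação da Eq. (1) é mostrado a seguir; os valores numéricos utilizados (massa específica aparente da preforma e fração em massa de fibras) são hipotéticos, escolhidos apenas para reproduzir a ordem de grandeza das frações em volume citadas acima.

# Esboço ilustrativo da Eq. (1); os valores numéricos são hipotéticos.

def fracao_volume(frac_massa_i, rho_preforma, rho_i):
    # Eq. (1): %Vi = %Mi . (rho_pref / rho_i)
    return frac_massa_i * rho_preforma / rho_i

# Exemplo hipotético: preforma com massa específica aparente de 0,90 g/cm3,
# contendo 60% em massa de fibras de carbono (rho = 1,76 g/cm3).
v_fibra = fracao_volume(0.60, 0.90, 1.76)
print(f"Fração em volume de fibras: {v_fibra:.1%}")  # ~30,7%, ordem de grandeza citada no texto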

Processamento Via Fase Líquida

O carbono não funde e não é sinterizável, exceto a pressões e temperaturas elevadas e com matérias-primas especiais, sendo impraticável a obtenção desse material por meios que utilizem tais processos (sinterização ou fusão). Uma das alternativas viáveis para obtenção de carbono, via fase líquida, é através da pirólise, em atmosfera inerte, de materiais orgânicos, como resinas termorrígidas e piches. A pirólise de compostos de materiais orgânicos para formação de matriz de carbono tem sido uma das rotas mais utilizadas para obtenção de compósitos de carbono reforçados com fibras de carbono (CRFC). Os compósitos CRFC são uma classe de materiais de engenharia, que aliam as vantagens da elevada resistência e rigidez específicas das fibras de carbono com as propriedades refratárias da matriz de carbono, permitindo que o material apresente, dentre outras, boas resistências à ablação e ao choque térmico, adequada resistência mecânica, elevada rigidez e inércia química, elevadas condutividades térmica e elétrica

e baixa massa específica. A utilização do tipo de precursor, sólido no caso de uma resina termorrígida, ou líquido, no caso de piche, para a formação do carbono da matriz, define o tipo de processo a ser utilizado na manufatura do compósito CRFC. Em qualquer circunstância, o reforço de fibras de carbono, ou preforma, passa por um processo inicial de impregnação, que favorece a fixação da geometria da peça a ser manufaturada (Gonçalves, 2008).

As técnicas convencionais de fabricação de compósitos de matriz carbonosa se baseiam na utilização de prensagem a quente, injeção ou extrusão, onde peças são obtidas pela aglomeração/compactação de partículas. Como exemplo disso, podem-se citar os grafites sintéticos, um caso típico de compósito com reforço particulado, onde as partículas de coque são o reforço e o piche é a fase ligante, ou matriz. Durante o processamento, as fases (reforço e matriz) são submetidas simultaneamente a altas temperaturas e pressão, para conversão do material em carbono. Entretanto, essa técnica não é conveniente para o processamento onde o reforço é constituído de fibras longas, devido às limitações de tamanho do componente a ser moldado (tamanho dos equipamentos de prensagem a quente). Soma-se a isso o fato de que, durante o processo de consolidação, as fibras podem ser danificadas por esmagamento e ruptura, devido aos esforços compressivos durante a aplicação de pressão.

Como afirmado anteriormente, a pirólise em fase sólida de resinas termorrígidas, como as resinas fenólicas, gera carbonos não grafitizáveis, e, consequentemente, as propriedades termomecânicas não são satisfatórias para a maioria das aplicações. Por outro lado, a pirólise em fase líquida de piches, muito embora resulte em carbonos altamente orientados e grafitizáveis e com melhores propriedades termomecânicas, tem como inconveniente a necessidade de ser efetuada a altas pressões, tendo em vista que o rendimento em carbono de piches é função da pressão de processo, conforme mostra a Fig. 4 (Lachmann, 1978; Savage, 1993). Para matrizes termorrígidas não há modificação no rendimento final em carbono com alteração da pressão de pirólise.

Figura 4: Rendimento em carbono de piches em função da pressão de carbonização, para pirólise a 1000 °C (Lachmann, 1978; Savage, 1993). Eixos do gráfico: pressão de pirólise (MPa) versus rendimento em carbono (%).


Os piches contêm cerca de 70-80% em massa de carbono em sua composição, e durante o processo de pirólise a perda de voláteis, a temperaturas em torno de 1000ºC e pressão de 0,1 MPa, promove um rendimento em carbono fixo final de apenas 50%. Em geral, o rendimento em carbono após o processo de pirólise é resultado dos seguintes fatores (Rand, 1993): (1) baixo conteúdo de carbono dos materiais impregnantes, (2) escoamento prematuro (exsudação) do material impregnante dos poros da peça durante o processo de pirólise do compósito, (3) baixa pressão durante o processo de impregnação/pirólise do compósito, (4) taxas de aquecimento muito elevadas, que impedem o aquecimento uniforme da peça, aumentando os efeitos de evolução de gás do material impregnante sob pirólise. Porém, o incremento na pressão durante o processo de pirólise para 100 MPa, resulta em rendimentos da ordem de 80% (Sohda, 1999).

Os rendimentos em carbono em massa obtidos a partir da pirólise de piches são praticamente equivalentes, independentemente da pressão utilizada na pirólise, enquanto os rendimentos em volume são significativamente diferentes, conforme é apresentado na Tab. 1, considerando a massa específica do piche ρ = 1,33 g/cm3 e a massa específica do coque (600oC) de ρ= 2,15 g/cm3 (Sohda, 1999).

Por exemplo, se forem considerados a eficiência de impregnação de 100%, as massas específicas do piche e do coque (2500 °C), o gráfico da Fig. 4 e os resultados da Tab. 1, o rendimento volumétrico de densificação será de ~45%.

A solução tecnológica vigente para o processamento de compósitos CRFC é, então, realizar as etapas de impregnação/pirólise em equipamentos que possam atingir níveis de pressão elevados, como o representado esquematicamente na Fig. (5), sempre em atmosfera inerte para evitar a oxidação do material. O equipamento opera em pressão isostática, onde a peça é posicionada dentro de um sistema de aquecimento interno ao forno. Pressão isostática é utilizada para manter toda a peça envolta sob uma pressão fixa.

Tabela 1: Rendimentos em massa e volume em função da pressão de carbonização de piches.

Pressão de carbonização (MPa) | Rendimento em carbono (% em massa) | Rendimento em carbono (% em volume)
1                             | 72                                 | 30
100                           | 75                                 | 55

O rendimento, ou eficiência de densificação volumétrica (ΔV/Vv), ou seja, a razão entre o volume de matriz carbonosa e o volume de porosidade disponível para densificação, pode ser obtido pela Eq. (2) (Rellick, 1990).

ΔV/Vv = Ym · YI · (ρo / ρi)    (2)

onde:
ΔV = fração em volume de matriz de carbono “incorporada” na fração volumétrica de vazios do compósito, na primeira etapa de densificação;
Vv = fração volumétrica de vazios disponível para densificação;
Ym = rendimento em massa do impregnante (%);
YI = eficiência de impregnação (0 – 1, ou 0 – 100%);
ρo = massa específica inicial da matriz (impregnante) (g/cm3);
ρi = massa específica da matriz na temperatura de tratamento térmico final (g/cm3).
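Um esboço numérico da Eq. (2), assumindo eficiência de impregnação de 100% (YI = 1) e as massas específicas citadas no texto para o piche (1,33 g/cm3) e para o coque tratado termicamente (2,15 g/cm3), reproduz o rendimento volumétrico de densificação de ~45% mencionado anteriormente:

# Esboço da Eq. (2); rendimento em massa tomado da Tab. 1 (pirólise a baixa pressão).

def eficiencia_densificacao(rend_massa, efic_impreg, rho_impregnante, rho_matriz_final):
    # Eq. (2): dV/Vv = Ym . YI . (rho_o / rho_i)
    return rend_massa * efic_impreg * rho_impregnante / rho_matriz_final

dv_vv = eficiencia_densificacao(rend_massa=0.72, efic_impreg=1.0,
                                rho_impregnante=1.33, rho_matriz_final=2.15)
print(f"Rendimento volumétrico de densificação: {dv_vv:.0%}")  # ~45%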

Figura 5: Representação esquemática de equipamento de prensagem isostática a quente para manufatura de compósitos CRFC.

O uso de pressão isostática durante o processo de pirólise aumenta o rendimento em carbono de piches poliaromáticos, mas, como a conversão não é completa, é imprescindível a realização de ciclos de impregnação/pirólise subsequentes para atingir a massa específica necessária, que pode chegar a 1,95 g/cm3, para aplicações termoestruturais. A parte interna do vaso é constituída de um elemento resistivo isolado do vaso por uma barreira térmica, que mantém a parede externa a temperaturas próximas da temperatura ambiente. A pressão é transmitida à peça por intermédio de um meio gasoso inerte (hélio, argônio ou nitrogênio). O equipamento de prensagem isostática a quente é dotado de sistemas auxiliares de suprimento de gases, compressores e controladores de fluxo para o sistema de pressurização, e de controladores programáveis de temperatura para o ciclo térmico. Assim, no caso de preformas multidirecionais, os interstícios do reforço (varetas pultrudadas ou fibras) vão sendo preenchidos continuamente e, consequentemente, ocorre aumento na massa específica do produto. Ao final do processo, poros e microtrincas remanescentes representam cerca de 5% em volume, que é o limite máximo aceitável para aplicações destinadas a gargantas de tubeiras e proteções térmicas de reentrada atmosférica.


O processamento só se completa submetendo o compósito CRFC a processos térmicos a temperaturas superiores a 1700 °C, sendo ideais condições de 2500-2800 °C. O processo de tratamento térmico nessa faixa de temperatura é denominado, para carbonos, de grafitização. O processo de grafitização, conforme mostra esquematicamente a Fig. 6, leva ao desenvolvimento de ordenamento cristalográfico em materiais de carbono, aproximando-se da estrutura ideal do cristal de grafite. As propriedades termofísicas, como condutividade térmica e condutividade elétrica, mudam com o processo de grafitização, fazendo com que o material se torne melhor condutor elétrico e térmico, conforme mostra a Tab. 2 (Oberlin, 2001).

Para o processo de grafitização podem-se utilizar fornos em vaso fechado, que demandam uso de atmosfera inerte de hélio, ou fornos tipo Castner, derivados do conceito de Acheson, onde o tratamento térmico da peça utiliza-se do efeito Joule. O processo que utiliza vaso fechado pode ter custo maior em virtude do uso contínuo de gás inerte. Já, no caso do forno tipo Castner, a própria peça é o elemento resistivo, conforme mostrado esquematicamente na Fig. 6, onde é empacotada e isolada do ambiente externo.

O ciclo total de processamento, constituído das etapas de impregnação, pirólise a 1000 °C e grafitização, pode ter duração total de 150-200 horas. Pode-se calcular o perfil de incremento de massa específica de preformas multidirecionais considerando-se, inicialmente, a fração volumétrica de vazios da preforma e o rendimento em carbono da matriz impregnante. A matriz de piche impregnante tem massa específica de 1,3 g/cm3 e, quando pirolisada e tratada termicamente até 2500 °C, atinge massa específica de 2,15 g/cm3. Considerando-se os parâmetros mencionados anteriormente, e utilizando a Eq. (1), pode-se estimar o incremento da massa específica em função das etapas do processo de adensamento, onde a incorporação de matriz carbonosa é efetuada em etapas. Podem-se considerar, para efeito de exemplo, as preformas tridirecionais (3D ortogonal) e 4D planar, com tratamento térmico até 2500 °C, pirólises efetuadas a 0,1 MPa (rendimento em carbono residual de 50%) e a 100 MPa (rendimento em carbono residual de 85%), conforme a Fig. 4, e admitindo-se que os vazios remanescentes após os processos de impregnação/pirólise sejam preenchidos totalmente em etapas subsequentes. Os gráficos da Fig. 7 e da Fig. 8 mostram o incremento de massa específica nas condições estipuladas. Obviamente, o processamento, quando realizado a pressões de 100 MPa, resulta em materiais com maior massa específica que os realizados a 0,1 MPa. Os gráficos mostram ainda que são necessários de 5 a 7 ciclos completos de impregnação/pirólise para atingir a massa específica máxima possível.
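O raciocínio por trás dos gráficos das Fig. 7 e 8 pode ser esboçado numericamente como abaixo; trata-se apenas de uma aproximação ilustrativa (e não da planilha de cálculo dos autores), assumindo preenchimento completo da porosidade aberta em cada ciclo (YI = 1), piche com 1,3 g/cm3, coque grafitizado com 2,15 g/cm3 e porosidade inicial hipotética de 60%.

# Esboço da evolução da massa específica aparente ao longo dos ciclos de impregnação/pirólise.
RHO_PICHE = 1.3   # g/cm3
RHO_COQUE = 2.15  # g/cm3 (após tratamento térmico a 2500 °C)

def ciclos_densificacao(rho_inicial, porosidade_inicial, rend_massa, n_ciclos=7):
    rho, porosidade = rho_inicial, porosidade_inicial
    historico = [round(rho, 2)]
    for _ in range(n_ciclos):
        delta_massa = porosidade * RHO_PICHE * rend_massa  # massa de coque por volume total da peça
        rho += delta_massa
        porosidade -= delta_massa / RHO_COQUE               # o coque formado ocupa parte dos poros
        historico.append(round(rho, 2))
    return historico

# Preforma 3D ortogonal: massa específica inicial de 0,58 g/cm3; porosidade inicial hipotética de 60%.
print(ciclos_densificacao(0.58, 0.60, rend_massa=0.85))  # pirólise a 100 MPa
print(ciclos_densificacao(0.58, 0.60, rend_massa=0.50))  # pirólise a 0,1 MPa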

Tabela 2: Influência do processo de grafitização nas propriedades intrínsecas de compósitos CRFC.

Propriedade                     | Aumento | Decréscimo
Resistividade elétrica          |         | X
Massa específica                | X       |
Resistência mecânica            |         | X
Coeficiente de expansão térmica |         | X
Porosidade                      | X       |
Módulo elástico                 | X       |
Condutividade térmica           | X       |

Figura 6: Representação esquemática de forno tipo Castner para processo de grafitização de carbono, e representação de mudanças na microestrutura amorfa para estrutura cristalina, que ocorrem durante as fases de tratamento térmico (Kuznetsov, 2000; Griffiths, 1981).


Figura 7: Incremento de massa específica em função das etapas de impregnação/pirólise (IP) de uma preforma 3D ortogonal 2:2, com massa específica inicial da preforma de 0,58 g/cm3, para rendimentos em coque de 85% (pressão de 100 MPa) e 50% (pressão de 0,1 MPa).


Alta massa específica (>1,80 g/cm3) é requisito mandatório para estruturas termoestruturais, como gargantas de tubeiras de foguete e proteções térmicas, onde demandas termomecânicas extremas devem ser atendidas, como por exemplo, resistência à erosão.

Processamento Via Fase Gasosa

O processamento de compósitos CRFC via fase gasosa envolve, de forma simplista, a deposição de carbono advindo de um gás hidrocarboneto, sob condições adequadas de temperatura, pressão e fluxo de gás, no substrato de reforço, seja ele na forma de fibras de carbono ou na forma de uma preforma multidirecional. O processo é assim denominado infiltração (ou deposição) química em fase gasosa, ou vapor. Na literatura o processo foi cunhado de CVD (chemical vapor deposition) ou CVI (chemical vapor infiltration). Diferentes tipos de reatores do processo de infiltração química em fase gasosa têm sido objeto de pesquisa, tanto na comunidade acadêmica, quanto na área industrial. Dentre estes, pode-se citar o processo CVI isotérmico (I-CVI), o processo CVI gradiente de temperatura (GT-CVI), o processo CVI isotérmico de fluxo forçado (IF-CVI),e o processo CVI de pressão pulsada (P-CVI). Todos esses processos têm demonstrado viabilidade para produção de compósitos CRFC (Li, 2000; Delhaès, 2005; Zhang, 2003).

As considerações teóricas relativas a físico-química envolvida nos processos CVD/CVI não são triviais de entendimento, sendo objeto de intensa pesquisa desde o emprego dessa tecnologia para compósitos termoestruturais na década de 1950. Isso se deve ao fato de que vários fatores, como por exemplo a pressão, razão de diluição

do gás fonte, temperatura, fluxo total e arquitetura da preforma, influenciam simultaneamente o processamento e a microestrutura final do compósito obtido. Os principais gases precursores da matriz de carbono que podem ser utilizados neste processo são metano, propileno, propano e gás natural, mas líquidos vaporizáveis, como benzeno, querosene e hexano, também podem ser opções a serem consideradas (Rovillain, 2001; Beaugrand, 2001; Delhaès, 2005). Os processos em fase gasosa não são influenciados somente pelos parâmetros clássicos de deposição química em fase vapor, mencionados anteriormente (gás precursor, pressão da câmara de reação e fluxo de gás, diluente e concentração de diluente, temperatura e tempo de residência, cinética de decomposição do gás fonte), mas também pelo parâmetro macroscópico representado pela razão entre a área superficial (A) e o volume livre para deposição (V), representada pela Eq. (3) (Hüttinger, 1990; Chen, 2007). Assim, as interações da fase gás homogênea, representada pelo fluxo de gás, e as reações heterogêneas superficiais, que ocorrem no reforço, são controladas pela razão A/V (Zhang, 2003). Em geral, para valores menores de A/V, reações homogêneas na fase gás são favorecidas e, para valores altos de A/V, reações heterogêneas superficiais dominam.

A/V = (área superficial cumulativa de poros por grama) / (volume cumulativo de poros por grama)    (3)

A razão macroscópica A/V também pode ser obtida pela Eq. (4), onde P0 e r0 são, respectivamente, a porosidade inicial da preforma e o raio da fibra de carbono (~4 μm) ou da vareta de reforço (1 mm ou 2 mm).

A/V = 2 · (1 − P0) / (P0 · r0)    (4)
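A Eq. (4) pode ser avaliada diretamente, como no esboço abaixo; os valores de porosidade inicial e de raio são apenas exemplos numéricos coerentes com os citados no texto.

# Esboço da Eq. (4): razão entre área superficial e volume livre para deposição.

def razao_A_V(porosidade_inicial, raio_cm):
    # Eq. (4): A/V = 2 (1 - Po) / (Po . ro)
    return 2.0 * (1.0 - porosidade_inicial) / (porosidade_inicial * raio_cm)

# Exemplos: fibra de carbono (ro ~ 4 um = 4e-4 cm) e vareta pultrudada (ro = 1 mm = 0,1 cm),
# ambas com porosidade inicial hipotética de 60%.
print(f"A/V (fibras):  {razao_A_V(0.60, 4e-4):.0f} cm^-1")
print(f"A/V (varetas): {razao_A_V(0.60, 0.1):.1f} cm^-1")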

Durante o decorrer do processo de infiltração (adensamento), a área superficial de poros susceptíveis de infiltração e deposição se reduz, devido ao preenchimento destes com matriz de carbono. A relação A/V tende a aumentar durante o processo de adensamento, devido à redução de porosidade da peça e ao aumento de área superficial interna para deposição da matriz. Considerando-se, basicamente, a pressão total do sistema reator e a temperatura de processo, podem-se obter basicamente três microestruturas principais, a saber: laminar rugoso (LR), laminar liso (LL) e isotrópico (ISO). Dentre estas microestruturas a mais anisotrópica é a do tipo laminar rugoso, que tem a maior massa específica (2,0–2,1 g/cm3), devido à melhor organização nanoestrutural e pela ausência de porosidade intrínseca (Goma, 1986). Além disso, esse tipo de microestrutura é a única passível de grafitização (Delhaès, 2005), ou seja, evoluem para uma fase termodinamicamente estável de grafite hexagonal. Embora seja difícil o controle do processo de manufatura de compósitos CRFC via fase gasosa,

Figura 8: Incremento de massa específica em função das etapas de impregnação/pirólise (IP), e etapas intermediárias de grafitização, de uma preforma 4D 0/60 planar (2:2), com massa específica inicial da preforma de 0,55 g/cm3, para rendimentos em coque de 85% (pressão de 100 MPa) e 50% (pressão de 0,1 MPa).


objetivando a obtenção de uma única microestrutura, devido à uniformidade do ambiente reacional, é desejável que o mesmo seja conduzido de modo a se obter a estrutura laminar rugosa (Farhan, 2007).

Na indústria, o processo isotérmico de adensamento de compósitos CRFC é o mais utilizado, porque permite volumes de produção compatíveis com a complexidade do processo. Entretanto, em condições isotérmicas, a taxa de deposição é lenta, podendo variar de 0,0010 a 0,0025 g/cm3.h. Para atingir valores de massa específica próximos a 1,85 g/cm3 demandam-se períodos de tempo de 600 a 1200 horas de deposição para o completo adensamento do substrato poroso (Li, 2000). Nessas condições a eficiência do processo de adensamento por infiltração química em fase gasosa é próximo de 20%, considerando o período total de deposição, o fluxo de gás e a quantidade de carbono depositada no substrato de reforço (Li, 2008).
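Uma estimativa simples, assumindo taxa de deposição constante e uma massa específica inicial hipotética para a preforma, ilustra a ordem de grandeza dos tempos de processo citados:

# Estimativa simplificada do tempo de adensamento por CVI isotérmico (taxa de deposição constante).
rho_inicial = 0.60  # g/cm3, valor hipotético para a preforma
rho_alvo = 1.85     # g/cm3, massa específica final citada no texto

for taxa in (0.0010, 0.0025):  # g/cm3.h, faixa de taxas de deposição citada no texto
    horas = (rho_alvo - rho_inicial) / taxa
    print(f"taxa = {taxa:.4f} g/cm3.h -> ~{horas:.0f} h")  # ordem de grandeza das 600-1200 h citadas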

O equipamento básico de processamento de compósitos CRFC por infiltração química em fase gasosa é constituído de reatores metálicos operados a vácuo (~15 kPa), seja qual for a variante de operação do sistema. A Fig. 9 mostra duas variantes do processo, uma isotérmica e outra por gradiente térmico. A operação do reator sob vácuo permite evitar que a etapa principal do processo seja controlada pela difusão dos gases na estrutura de reforço. Além disso, é desejável manter tanto a temperatura quanto a pressão tão baixas quanto possível, para prevenir a nucleação homogênea na fase gasosa e a formação de fuligem. O aquecimento das peças (preformas), no caso de equipamentos que operam de forma isotérmica, pode ser efetuado tanto por meio resistivo quanto por indução. A escolha de um ou outro sistema de aquecimento depende das condições operacionais, dos custos envolvidos no processo e da solução tecnológica adequada ao processo.

Os sistemas periféricos da unidade de processamento incluem ainda unidades de controle de gás no reator por fluxímetros de massa e sistemas de condensação de gases de rejeito de processo (Daws, 2003).

Devido ao longo tempo de processamento e a baixa eficiência do processo, os custos de produção para compósitos CRFC obtidos por meio do processo de infiltração química em fase gasosa são significativamente superiores, podendo chegar a um custo de produção dez vezes maior que os obtidos pelo processo via líquida. Entretanto, o investimento inicial em instalações é menor que os processos via fase líquida.

Embora a matriz de carbono pirolítico tenha, comprovadamente, melhores propriedades termomecânicas, quando comparada à matriz de coque oriunda do processamento de piches na variante do processo via fase líquida, algumas inconveniências relacionadas ao processamento via fase gasosa indicam que, na indústria, é mais favorável utilizá-lo quando o componente estrutural desejado é uma peça delgada (Goma, 1986). Dentre essas inconveniências, podem-se citar a não homogeneidade de massa específica ao longo da peça e a complexidade dos mecanismos físico-químicos de deposição via fase gasosa, o que implica rígidos controles de processo. Aliado a isso, os processos via deposição/infiltração têm alto custo de produção, cuja estimativa chega a dez vezes o custo de produção via processo em fase líquida, considerando massas específicas equivalentes obtidas por um ou outro processo.

Prevalecem para todos os processos discutidos no presente trabalho os conceitos fundamentais de Kotlensky (Kotlensky, 1973), conforme mostra a Fig. 10, que definem as topologias de densificação atribuídas a cada tipo de matriz. Um modelo de poros interconectados é apresentado. O preenchimento desses poros (impregnação) por matriz de piche e posterior tratamento térmico resulta em carbonos “moles”, que apresentam encolhimento, devido à perda de massa. Espaços vazios remanescentes ainda são presentes nos poros parcialmente preenchidos e podem ser submetidos a um novo processo de impregnação, levando ao adensamento do material. De maneira similar, a impregnação com matriz termorrígida preenche poros e após o processo de tratamento térmico resulta em carbonos “duros”. A diferença em relação ao processo de impregnação com piche é que o processo de cura da matriz termorrígida é acompanhado de encolhimento. No caso em que o processo de adensamento é efetuado por CVD/CVI, a deposição de carbono pirolítico pode ocorrer nas paredes dos poros. Condições de processamento devem ser ajustadas com rigor para minimizar o bloqueio prematuro de poros, o que pode resultar em materiais com baixa massa específica.

Figura 9: Diagrama esquemático de sistemas de deposição/infiltração química em fase gasosa para produção de compósitos CRFC, por meio isotérmico/isobário (A) e por gradiente térmico (B) (elementos indicados: entrada de gás fonte, preforma e susceptor; no gradiente térmico, T2 < T1).



Figura 10: Representação esquemática do mecanismo de preenchimento de poros nos processos de densificação de compósitos CRFC (Kotlensky, 1973).

CONCLUSÕES

No projeto e no processamento de compósitos termoestruturais, e particularmente no processamento de compósitos CRFC, deve-se compatibilizar as metodologias existentes, ou seja, processamento via fase líquida e via fase gasosa. Assim, cada processo deve se adaptar ao projeto do componente que se deseja manufaturar, podendo, em qualquer etapa de processamento, optar-se por uma ou outra variante (processo via fase líquida ou via fase gasosa) para se obter o compósito CRFC. Essas possibilidades implicam em que uma imensa variedade de materiais e componentes pode ser obtida. Por exemplo, freios de aeronaves podem ser inicialmente moldados com matriz resina termorrígida (fenólica) e, posteriormente, serem submetidos a um processo de adensamento pela utilização de infiltração química em fase gasosa. Por outro lado, estruturas mais espessas como gargantas de tubeiras de foguete ou proteções térmicas adjacentes a estas são obtidas por processamento em fase líquida, pela utilização, na maioria dos casos, de matrizes oriundas de piches e/ou matriz de resina termorrígida (fenólica). É importante que durante o processamento de compósitos CRFC seja obtido o máximo de rendimento em carbono, após o processo de pirólise, do material utilizado como precursor da matriz. Evita-se, assim, um número excessivo de ciclos de reimpregnação, possibilitando a redução do tempo de processo e redução de custo.

REFERÊNCIAS

Almeida, S. F M., Nogueira Neto, Z. S., 1994, “Effect of Void Content on the Strength of Composites Laminates”, Composite Structures, Vol. 28, pp. 139-148.

Ancelotti Jr., A.C., 2006, “Effects of porosity on the Shear Strength and Dynamic Properties of Carbon Fiber/Epoxy Composites”, MSc. Thesis, Technology Institute of Aeronautics, São José dos Campos, SP.

Becker, A., Hüttinger, K.J., 1998, “Chemistry and Kinetics of Chemical Vapor Deposition of Pyrocarbon – III. Pyrocarbon Deposition from Propylene and Benzene in the Low Temperature Regime”, Carbon, Vol. 36, Nº. 3, pp. 201-211.

Bento, M.S., 2004, “Estudo Cinético da Pirólise de Precursores de Materiais Carbonosos”, Tese de Mestrado, Instituto Tecnológico de Aeronáutica, São José dos Campos, SP, 213p.

Beaugrand, S., David, P., Bruneton, E., Bonnamy, S., 2001, “Rapid Densification of Carbon-Carbon Composites by Film-boiling Process”, Carbon, 21.5.

Buckley, J.L., Edie, D.D., 1993, “Carbon-Carbon Materials and Composites”, 1st edition, Noyes Publications.

Chen, J.X., Xiong, X., Huang, Q.Z., Yi, M.Z., Huang, G.B.Y., 2007, “Densification mechanism of chemical vapor infiltration technology for carbon-carbon composites”, Trans. Nonferrous Met. Soc. China, Vol. 17, pp. 519-522.

Costa, M.L., Rezende, M.C., Almeida, S.F.M., 2001, “The influence of porosity on the interlaminar shear strength of carbon/epoxy and carbon/bismaleimide fabric laminates”, Composites Science and Technology, Vol. 61, pp. 2101-2108.

Daws, D.E., Rudolph, J.W., Zeigler, D., Bazshushtari, A., 2003, “Hardware Assembly for CVI/CVD Processes”, US Patent 6,669,988.

Delhaès, P., 2002, “Chemical Vapor Deposition and Infiltration Processes of Carbon Materials”, Carbon, Vol. 40, pp. 641-657.

Delhaès, P., Trinquecoste, M., Lines, J.F., Cosculluela, A., Goyhénèche, J.M., Couzi, M., 2005, “Chemical vapor infiltration of C/C composites: Fast densification processes and matrix characterizations”, Carbon, Vol. 43, pp. 681-691.

Farhan, S., Li, K.Z., Guo, L.J., 2007, “Novel thermal gradient chemical vapor infiltration process for carbon-carbon composites”, New Carbon Materials, Vol. 22, Nº. 3, pp. 247-252.

Fitzer, E., 1987, “The Future of Carbon-Carbon Composites”, Carbon, Vol. 25, Nº. 2, pp. 163-190.

Goma, J., Oberlin, A., 1986, “Characterization of low temperature pyrocarbons obtained by densification of porous substrates”, Carbon, Vol. 24, Nº. 2, pp. 135-142.


Gonçalves, A., 2008, “Caracterização de Materiais Termoestruturais a base de Compósitos de Carbono Reforçados com Fibras de Carbono (CRFC) e Carbonos Modificados com Carbeto de Silício (SiC), Tese de Doutorado, Instituto Tecnológico de Aeronáutica, São José dos Campos, SP, 226p.

Griffiths, J.A., Marsh, H., 1981, Proceedings of 15th Biennial Conf. on Carbon, University of Pennsylvania, Philadelphia, USA, 22-26 June.

Hüttinger, K.J., 1990, “Theoretical and practical aspects of liquid-phase pyrolysis as basis of the carbon matrix of CFRC”, Proceedings of the NATO Advanced Study Institute on Carbon Fibers and Filaments, INCONNU (1989), Vol. 177, pp.301-325.

Kotlensky, W.V., 1973, “Deposition of Pyrolytic Carbon in porous Solids”, Chem Phys Carbon, 9, 173, CRC Press.

Kuznetsov, D.M., 2000, “Shrinkage Phenomena in Graphitization of Preforms in Castner Furnaces”, Refractories and Industrial Ceramics, Vol. 41, Nº. 7 - 8.

Lachmann, W.L., Crawford, S.A., McAllister, L.E., 1978, “Multidirectionally Reinforced Carbon-Carbon Composites”, Proceedings of Int. Conf. on Composite Materials, B. Noton, R. Signorelli, K. Street and L. Phillips, Ed., Metallurgical Soc. of AIME, pp. 1302-1319.

Lamicq, P., Macé, J., Pérez, B., 1981, “4D Carbon/Carbon High Temperature Thermal Evaluation”, Proceedings of 15th Biennial Conference on Carbon, Philadelphia, USA, pp. 528-529.

Levy, F. N., Pardini, L. C., 2006, “Structural Composites: Science and Technology” (in Portuguese), Ed. Edgard Blucher, São Paulo, 313p.

Li, H.J., Hou, X.H., Chen, Y.X., 2000, “Densification of Unidirectional Carbon–Carbon Composites by Isothermal Chemical Vapor Infiltration”, Carbon, Vol. 38, pp. 423–427.

Li, J., Luo, R., 2008, “Kinetics of chemical vapor infiltration of carbon nanofiber-reinforced carbon/carbon composites”, Materials Science and Engineering, A 480, pp. 253–258.

Mason, K.F., 2008, “Autoclave quality outside the autoclave?” High Performance Composites, Mar. 2006, Available in: http://www.compositesworld.com/articles/ autoclave-quality-outside-the-autoclave.aspx. Access on 10 Nov. 2008.

Oberlin, A., Bonnamy, S., 2001, “Carbonization and Graphitization”, in Graphite and Precursors, Ch. 9, ed. by Pierre Delhaès, Gordon and Breach Science Publ.

Otani, S., 1996, “Study of the Influence of the characteristics of coal-tar pitches on the obtention of Carbon/Carbon Composites”, Doctorate Thesis, University of São Paulo, Dept. of Chemical Engineering, São Paulo, S.P., 160p.

Pardini, L.C., 2000, “Preformas para Compósitos Estruturais”, Polímeros Ciência e Tecnologia, Vol. 10, Nº2, pp. 100-109.

Prado, V. J. S., 2009, “Molding of Composites by the Resin Infusion Process: Property Correlation”, MSc. Thesis, Technology Institute of Aeronautics, São José dos Campos, SP.

Pardini, L.C., Gregori, M.L., 2009, “Modeling Elastic and Thermal Properties of 2.5D Carbon Fiber C/SiC Hybrid Matrix Composites by Homogenization Method” Proceedings of 6th European Workshop on Thermal Protection Systems and Hot Structures, 1-3 April 2009, Stuttgart, Germany.

Rand, B., 1993, “Matrix Precursors for Carbon-Carbon Composites”, in Essentials of Carbon-Carbon Composites, Chap.3, Royal Soc. Chemistry, London, UK.

Rellick, G., 1990, “Densification Efficiency of Carbon-Carbon Composites, Carbon, Vol. 28, Nº. 4, pp. 589-594.

Rovillain, D., Trinquecoste, M., Bruneton, E., Derre, A., David, P., Delhaes, P., 2001, “Film boiling chemical vapor infiltration: An experimental study on carbon/carbon composite materials”, Carbon, Vol. 39 pp.1355–1365.

Savage, G., 1993, “Carbon-Carbon Composites”, Chapman & Hall, London, UK.

Sohda, S., Shinagawa M., Ishii, M., 1999, “Effect of carbonization pressure on carbon yield in a unit volume”, Composites: Part A, Vol. 30, pp. 503-506.

Schmidt, D.L., 1972, “Carbon/carbon composites”, SAMPE Journal, Nº. 8, pp. 9-19.

Zhang, W.G., Hüttinger, K.J., 2003, “Densification of a 2D carbon fiber preform by isothermal, isobaric CVI: Kinetics and carbon microstructure”, Carbon, Vol. 41, Nº. 12, pp. 2325-2337.

Page 115: Vol.1 N.2 - Journal of Aerospace Technology and Management
Page 116: Vol.1 N.2 - Journal of Aerospace Technology and Management

Journal of Aerospace Technology and Management V. 1, n. 2, Jul. - Dec. 2009 243

Claudio Antônio Federico*
Institute for Advanced Studies
São José dos Campos – [email protected]

Wagner Aguiar de Oliveira
Institute for Advanced Studies
São José dos Campos – [email protected]

Marlon Antônio Pereira
Institute for Advanced Studies
São José dos Campos – [email protected]

Odair Lélis Gonçalez
Institute for Advanced Studies
São José dos Campos – [email protected]

*author for correspondence

Avaliação da resposta de um contador do tipo “Long-Counter” para nêutrons do 241Am/Be

Resumo: Um detector de nêutrons do tipo “Long-Counter” está sendo utilizado dentro do contexto do projeto DREAB (Dosimetria da Radiação no Espaço Aéreo Brasileiro) para monitorar o fluxo integral de nêutrons oriundos da interação atmosférica da radiação cósmica, o qual tem importância na dose de radiação recebida por seres humanos ao nível do solo e é o principal responsável pela dose recebida por tripulações e instrumentação sensível de aeronaves. Neste trabalho, são apresentados os testes preliminares efetuados com o detector de nêutrons do tipo “Long-Counter”, utilizando uma fonte de nêutrons de 241Am/Be, nos quais foi avaliada a dependência direcional da resposta do equipamento nos sentidos radial e transversal e também foi verificada a resposta após um acréscimo de retro-blindagem no equipamento, de forma a minimizar o nódulo traseiro de resposta em relação ao nódulo frontal, condição de interesse para o trabalho em andamento.

Palavras-chave: Nêutron, Radiação cósmica, Fluência.

Evaluation of the response of a “Long-Counter” for 241Am/Be neutrons

Abstract: A “Long-Counter” neutron detector is being used in the context of the DREAB (Dosimetry of the Radiation in the Brazilian Airspace) project to monitor the integral flux of neutrons deriving from the interaction of cosmic radiation with the atmosphere, which contributes to the radiation dose received by humans at ground level and is mainly responsible for the dose received by aircraft crews and sensitive instrumentation. In this work, the preliminary tests performed with the Long-Counter neutron detector using a 241Am/Be neutron source are presented, in which the radial and transversal directional dependence of the detector response was evaluated and the response after the addition of a backward shield was verified, as a way of minimizing the back response nodule relative to the frontal nodule, a condition of interest for the work in progress.

Keywords: Neutron, Cosmic radiation, Fluence.

INTRODUÇÃO

O homem ao longo de sua vida está continuamente exposto aos efeitos da radiação ionizante proveniente do espaço, a qual é chamada de radiação cósmica (RC). A RC é atenuada pela atmosfera terrestre, porém, parte dela ainda atinge a superfície da terra, irradiando todos os seres vivos continuamente. A intensidade da radiação cósmica bem como sua composição e a de seus subprodutos depende da altitude, sendo que em maiores altitudes o nível de dose recebido, devido à radiação cósmica é maior do que em altitudes mais baixas, conforme pode ser observado na Fig. 1.

Received: 30/09/09 Accepted: 26/10/09

Figura 1: Taxas de dose efetiva devidas à radiação cósmica, em função da altitude (calculadas pelo programa CARI-6, para a região de São José dos Campos, SP, no período de janeiro de 2008)


Figura 3: Arranjo experimental

Esse efeito faz com que a dose devida à radiação cósmica incidente em tripulações de aeronaves seja muito maior do que em outros grupos de trabalhadores, justificando estudos e medidas preventivas que têm se multiplicado ao redor do mundo (Hajek, 2004).

Para o caso específico de aeronaves, o principal componente da radiação cósmica responsável pela dose recebida pelas tripulações e pela instrumentação sensível são os nêutrons, gerados como produtos secundários de interações de prótons com os constituintes atmosféricos. Tais nêutrons são produzidos com um espectro de energia bastante amplo, variando desde nêutrons térmicos, com energia em torno de 0,025 eV, até nêutrons de centenas de MeV, o que torna a sua detecção um processo bastante complicado.

O detector do tipo “Long-Counter” (LC) é utilizado para medidas de nêutrons oriundos de interação atmosférica da RC por possuir uma elevada eficiência de detecção em comparação com outros tipos de equipamento. Sua utilização se dá com a parte frontal direcionada para o zênite, de forma a maximizar a detecção de nêutrons de origem atmosférica. Neste trabalho, foram feitas medidas adicionais nas quais foi acrescentada uma blindagem na parte traseira do LC, de forma a minimizar ainda mais a captação de nêutrons espalhados pelo solo ou por estruturas adjacentes.

O LONG-COUNTER

O LC utilizado neste trabalho foi fabricado no Instituto de Estudos Avançados (IEAv) e baseia-se no modelo proposto por Slaughter (1974), com pequenas modificações. Na Figura 2, é apresentado o desenho do LC onde podem ser vistas suas estruturas principais, como as blindagens laterais de parafina borada, que possuem a função de blindar nêutrons incidentes lateralmente e as estruturas de polietileno, que possuem a função de termalizar os nêutrons incidentes, de forma que possam ser capturados no detector central (usualmente um detector proporcional do tipo 3He ou BF3). Para as medidas descritas neste trabalho foi adicionada uma capa de cádmio à face superior do detector (representada em azul na Fig. 2), que possui a função de minimizar a detecção de nêutrons térmicos oriundos do ambiente.

Detectores de nêutrons do tipo “Long-Counter” (LC) são empregados com sucesso em situações onde o espectro de energia a ser medido é bastante amplo e deseja-se uma resposta direcional.

ARRANJO EXPERIMENTAL

Para a finalidade em questão, que é a detecção de nêutrons oriundos de interação atmosférica da RC, é desejável a utilização de um monitor que tenha uma resposta bastante direcional, de forma a poder minimizar as contribuições de nêutrons espalhados ou emitidos por materiais do solo. Assim, foi montado um arranjo experimental, ilustrado na Fig. 3, onde o LC foi submetido ao fluxo de nêutrons rápidos emitidos por uma fonte, em diferentes angulações, a uma distância de 1,66m, com a finalidade de avaliar a dependência angular do LC.

Figura 2: Monitor de fluxo de nêutrons construído no IEAv (dimensões dadas em mm)

Nas medidas efetuadas a contribuição de nêutrons espalhados nos materiais presentes no ambiente (parede, piso, estruturas, etc) é grande. Para eliminar tais contribuições espúrias, é utilizada a técnica do “cone de sombra”, que consiste em efetuar medidas com a interposição de um cone absorvedor de nêutrons, construído com ferro e parafina borada (IAEA, 2000) entre a fonte e o equipamento de medida, de forma a se eliminar o feixe direto e medir unicamente os nêutrons espalhados no meio ambiente, cuja contribuição é subtraída nas medidas. Um esquema ilustrativo da utilização do cone de sombra é apresentado na Fig. 4.
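Em termos de tratamento dos dados, a técnica do cone de sombra se resume a subtrair das contagens totais (feixe direto mais espalhado) as contagens obtidas com o cone interposto (apenas espalhado); um esboço, com valores de contagem meramente fictícios, seria:

# Subtração da componente espalhada (técnica do cone de sombra); contagens fictícias.
contagens_total = 15800     # medida sem o cone: feixe direto + nêutrons espalhados
contagens_espalhado = 2300  # medida com o cone de sombra: apenas nêutrons espalhados
tempo_s = 600               # duração de cada medida, em segundos

taxa_liquida = (contagens_total - contagens_espalhado) / tempo_s
print(f"Taxa líquida devida ao feixe direto: {taxa_liquida:.1f} contagens/s")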

A fonte de 241Am/Be emite nêutrons com uma distribuição em energia apresentada na Fig. 5. A atividade atual do 241Am presente na fonte utilizada é de 3472 MBq (maio/2009). A produção de nêutrons se dá por meio da reação 9Be(α,n)12C produzida pela interação das partículas alfa originárias do decaimento do amerício, com os núcleos dos átomos de berílio.

O cálculo da produção de nêutrons é efetuado por meio do rendimento da reação, que é de 66 n/s.MBq, resultando em uma taxa de emissão de nêutrons (t) de 2,292 × 10⁵ n/s. A taxa de fluência de nêutrons (Φ) a uma distância d da fonte pode ser calculada por:

Φ = t / (4πd²)    (1)


Figura 5: Espectro de energia dos nêutrons produzidos por uma fonte de 241Am/Be (ISO 8529-1)

Figura 4: Esquema de uso do cone de sombra para medir a contribuição da radiação espalhada. Na imagem superior, o cone de sombra absorve o feixe direto da fonte que atingiria o detector. Na figura inferior, o detector mede ambas as radiações (direta e espalhada)

de forma que, para uma distância de 1m, teremos uma taxa de fluência de 1,82 n/cm2.s.
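Um esboço de verificação numérica da Eq. (1), partindo da atividade e do rendimento da reação citados acima:

import math

# Esboço da Eq. (1): taxa de fluência de uma fonte pontual com emissão isotrópica.
def taxa_fluencia(taxa_emissao, distancia_cm):
    return taxa_emissao / (4.0 * math.pi * distancia_cm ** 2)  # n/cm2.s

t = 3472.0 * 66.0  # 3472 MBq x 66 n/s por MBq = 2,292e5 n/s
print(f"Taxa de emissão: {t:.3e} n/s")
print(f"Taxa de fluência a 1 m:    {taxa_fluencia(t, 100.0):.2f} n/cm2.s")  # ~1,82 n/cm2.s
print(f"Taxa de fluência a 1,66 m: {taxa_fluencia(t, 166.0):.2f} n/cm2.s")  # distância usada no arranjo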

RESULTADOS

Foram efetuadas medidas com o LC variando sua angulação desde 0o até 360o em um corte transversal ao eixo principal do LC. Os resultados são apresentados na Fig. 6, onde se pode verificar que a resposta do detector é praticamente isotrópica. As variações de resposta observadas não excedem 1,5% da resposta média e são compatíveis com a incerteza experimental das medidas efetuadas.

As medidas referentes ao corte longitudinal em relação ao eixo principal do LC são apresentadas na Fig. 7, onde podem ser observados dois nódulos de eficiência bem pronunciados na direção frontal e traseira do equipamento.

Com o objetivo de reduzir a eficiência de medida na direção traseira, a qual é indesejável para o tipo de utilização a ser

Figura 6: Corte transversal ao eixo principal. Os resultados são normalizados para o ponto de máxima leitura

Figura 7: Corte longitudinal ao eixo principal. Os resultados são normalizados para o ponto de máxima leitura

dado ao LC, foi acrescentada uma blindagem adicional na parte traseira, composta de aproximadamente 10 cm de parafina borada. A parafina possui a função de reduzir a energia média dos nêutrons incidentes até próximo da energia de equilíbrio térmico (cerca de 0,025 eV) e o boro é utilizado por possuir uma alta seção de choque para absorção de nêutrons térmicos. As medidas efetuadas com a retro-blindagem citada são apresentadas na Fig. 7.

A fluência de nêutrons medida no detector (Φm) pode ser obtida pela relação,

Φm = c · fc    (2)

onde c é o número de contagens (nêutrons) em um período determinado e fc é o fator de calibração de contagens para fluência.

Com os dados obtidos no experimento é possível obter o fator de calibração (dado em termos de fluência de nêutrons


por contagem no detector) para o referido equipamento, para uma incidência frontal (0°), que é de (0,0564 ± 0,0026) n/cm2.ct para o detector sem retro-blindagem e (0,0584 ± 0,0028) n/cm2.ct para o detector retro-blindado. As medidas tomaram como ponto de referência o centro geométrico do detector.
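A aplicação da Eq. (2) pode ser esboçada como abaixo; o número de contagens usado é meramente ilustrativo, servindo apenas para mostrar como o fator de calibração converte contagens em fluência.

# Esboço do uso do fator de calibração (Eq. 2): fluência = contagens x fc.
fc = 0.0564        # n/cm2 por contagem (detector sem retro-blindagem, incidência frontal)
contagens = 12500  # valor fictício de contagens líquidas em um período de medida

fluencia_medida = contagens * fc
print(f"Fluência medida: {fluencia_medida:.0f} n/cm2")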

CONCLUSÕES

A metodologia de medida utilizando uma fonte de nêutrons rápidos de 241Am/Be e a técnica do cone de sombra mostrou-se adequada para a determinação da resposta espacial do LC. Os resultados indicam que o LC apresenta uma resposta bastante direcional e a adição de retro-blindagem propicia sua utilização para fins de medida de radiação cósmica, com incidência de cima para baixo e com alta rejeição de nêutrons espalhados em outras direções.

AGRADECIMENTOS

Os autores agradecem à FINEP pelo apoio financeiro parcial e ao Dr. Artur Flávio Dias, pela cessão dos desenhos e dados relativos ao Long-Counter utilizado.

REFERÊNCIAS

Hajek, M., Berger, T. , Vana, N., 2004, “A TLD-Based Personal Dosemeter System for Aircrew Monitoring”, Radiation Protection Dosimetry, Vol. 110, Nº. 1-4, pp. 337-341.

IAEA, 2000, “Calibration of Radiation Protection Monitoring Instruments”, Safety Report Series 16, International Atomic Energy Agency, Vienna.

ISO, 2001, “Reference Neutron Radiations – Characteristics and Methods of Production”, ISO 8529-1, International Organization for Standardization.

Slaughter, D., 1978, “An Operating Manual for the De Pangher Precision Long Counter”, Lawrence Livermore Laboratory, University of California.


Francisco Carlos P. Bizarria*
Institute of Aeronautics and Space
São José dos Campos – [email protected]

Silvana Aparecida Barbosa
Institute of Aeronautics and Space
São José dos Campos – [email protected]

José Walter P. Bizarria
University of Taubaté
Taubaté – [email protected]

João Maurício Rosário
State University of Campinas
[email protected]

*author for correspondence

Evaluation of the impact of convolution masks on algorithm to supervise scenery changes at space vehicle integration pads

Abstract: The Satellite Launch Vehicle developed in Brazil employs a specialized unit at the launch center known as the Movable Integration Tower. On that tower, fixed and movable work floors are installed for use by specialists, at predefined periods of time, to carry out tests mainly related to the pre-launch phase of that vehicle. Outside of those periods it is necessary to detect unexpected movements of platforms and unauthorized people on the site. Within that context, this work presents an evaluation of different resolutions of convolution mask and tolerances in the efficiency of a proposed algorithm to supervise scenery changes on these work floors. The results obtained from this evaluation are satisfactory and show that the proposed algorithm is suitable for the purpose for which it is intended.

Keywords: Scenery supervision, Convolution mask, Digital image processing, Satellite Launch Vehicle.

Received: 04/05/09 Accepted: 29/10/09

INTRODUCTION

The Satellite Launch Vehicles (SLV) developed in Brazil require their parts to be integrated at specialized facilities at the launch center so that tests can be carried out, mainly related to the pre-launch phase of the vehicle (Palmério, 2002). For this integration, a specialized unit at the launch center called the Movable Integration Tower (MTI) is used.

This tower is supported by a rectangular box-shaped metal structure - the longest side of which is vertical - equipped with a rail-mounted horizontal movement subsystem.

There are planned strategic points on that structure for the installation of an overhead traveling bridge, an elevator, work platforms (fixed and movable), doors and other subsystems necessary to cope with the activities undertaken on the tower (Yamanaka, 2006).

The fixed and movable floors are installed at preset levels on the interior of the integration tower, which houses several pieces of equipment used to carry out the activities related to the integration and testing of subsystems embedded in the aforementioned vehicle.

The activities undertaken at those floor levels are performed by specialists who need to adhere to a sequence of preset activities at predetermined periods of time.

Outside of those periods of time it is necessary to detect unexpected movements of the movable platforms, the presence of unauthorized people or elements not related to the site where those activities are being carried out.

Within this context, the current work presents an evaluation of different resolutions of convolution mask and tolerances relating to the efficiency of a proposed algorithm to supervise changes in a predefined environment on those work floors.

The main goal of this work is to present the results obtained from the use of different convolution mask resolutions on the efficiency of the algorithm that performs successive comparisons of images, treated by the average filter method, to detect scenery changes at the pad dedicated to the integration of satellite launch vehicles.

METHODOLOGY

System Architecture

To evaluate the impact of the use of different resolutions of convolution mask on the efficiency of the algorithm presented in this work, a prototype was built which adopted the architecture shown in Fig. 1.

It may be observed in Fig. 1 that the prototype architecture is split into three main parts:

i) Tower Model with Fixed Floor, Movable Floor and Lighting.

ii) Video Camera.

iii) Host Computer with Digital Frame Grabber Board and Application Software to detect scenery changes.

Figure 1. Architecture adopted to evaluate the efficiency of algorithm.

Figure 2. Prototype view.

The application software was developed to perform all the phases foreseen in the algorithm presented in this work, including display resources that present to the operators the image used as the reference in the comparisons and the images captured in continuous mode by the camera and frame grabber unit.

Prototype

A view of the components devised for the prototype to evaluate the use of different resolutions of convolution mask in the efficiency of the algorithm to detect changes of scenery at pads employed in the integration of satellite launch vehicles is presented in Fig. 2.

We chose one of the internal spaces, which are defined by the levels where the fixed and movable floors are located on the Tower Model, to evaluate the impact of the use of different resolutions of convolution mask on the efficiency of the algorithm to detect scenery changes.

Based on this, the Direct Lighting System was installed to illuminate the chosen space uniformly and to provide the Video Camera with proper operating conditions.

The positioning of the Video Camera optical system was chosen to provide the very best observation of the Movable Floors, in addition to the pads that are installed in close proximity to the Satellite Launch Vehicle body and are also used for physical access when the operators carry out their tasks on the vehicle.

Outside the period when the tasks are being undertaken, the occurrence of unexpected movement of these floors and/or the presence of unauthorized people and/or elements not related to the site are situations that should be identified immediately by the operation’s security team.

Starting with this identification, the security team should take every appropriate action aimed at either resolving the situations or minimizing their consequences.

The purpose of the Host Computer is to receive the Digital Frame Grabber Board and the Application Software that is able to detect the scenery changes.

The analog video signal created by the camera is received by the frame grabber board, which transforms it into a digital matrix based on the detected image.

In this figure, the following can be identified: the Movable Integration Tower, the Video Camera mounted on a tripod, and the Host Computer, which were used in carrying out the practical tests of this work.

The Movable Integration Tower representational model was developed on a scale of about one to thirty-three and consists of five levels for the doors, five levels for the floors, stairs and a rail-mounted horizontal movement subsystem.

Inside the tower there is a representational model of the Satellite Launch Vehicle.

This set-up, comprising the tower and the vehicle, represents the minimum requirement to reflect the real environment where it might be expected to detect changes of scenery.

The color Video Camera chosen to perform these tests is a 512 by 512 pixel resolution CCD (Charge Coupled Device) model.


The Host Computer is an IA-32 model (Intel Architecture – 32 bits), Intel Pentium IV Processor, 2.8GHz frequency, 1GByte of RAM, running Microsoft® Windows XP operating system.

A Digital Frame Grabber Board and Application Software to detect scenery changes are installed on the host computer.

The frame grabber board used in the tests is manufactured by National Instruments®, namely the PCI-1405 model (NI Vision, 2007).

The Application Software that runs the algorithm presented in this work was developed in the integrated development environment of National Instruments®, LabVIEW 7.1 with the IMAQ Vision module (Klinger, 2003).

Algorithm

Using the prototype developed for this work, we found that when two successive images of the same scene are captured, with no change apparent to a human observer, a direct comparison of the gray levels of corresponding pixels in the two images still yields many differences. These differences are mainly related to:

i) Variations in scene lighting.

ii) Accuracy of the camera’s optical and electronic system.

iii) Accuracy of the image acquisition board.

iv) Relative movement between the supervised site and the camera.

v) Variations in the transmission medium of the supervised image.

To detect scenery changes at a satellite launch vehicle integration pad, this work proposes an algorithm that minimizes the differences in the gray-level comparison between pixels by applying a digital filter to the images. The algorithm basically performs the following steps:

1st Step: To establish the value of:

i) Tolerance that will be used in the comparison between the gray levels of the pixels contained in the image matrix obtained from the supervised scenery and their equivalents in the standard image.

ii) Tolerance in percentage that must be used in the comparison between the number of valid pixels of the image from the supervised scenery and the total number of pixels from the standard image used as reference in the comparisons of the system.

iii) Mask resolution that will be applied in the convolution operations.

2nd Step: To begin the continuous capture of images from the supervised scene using the Digital Frame Grabber Board.

3rd Step: To choose and capture an image from the scene that is related to the desired condition for the supervised site.

4th Step: Should convolution with the spatial average filter be applied to the captured image?

If “yes”, to apply the mentioned filter and perform the following step (5th Step).

If “no”, only to perform the following step (5th Step).

5th Step: To memorize this captured image to be used as a standard in future comparisons undertaken by the scenery change detection system.

6th Step: To restart the continuous capture of images from the supervised scenery.

7th Step: To perform the same processing applied to the standard image on the images that will be used for comparison.

8th Step: To compare successively, with the defined tolerance, all pixels of the matrix obtained from the scene with their equivalents in the standard image. For each comparison in which the pixel values agree within that tolerance, to increment accumulator number one (Acc1).

9th Step: To determine the value of accumulator number two (Acc2) by dividing the value obtained in accumulator number one (Acc1) by the number of pixels in the image matrix used by the system.

10th Step: If the value in accumulator number two (Acc2) is greater than or equal to the specified tolerance percentage, the captured image is considered equal to the standard image; if it falls below that percentage, the captured image is considered different from the standard image.


11th Step: Should the comparison of images stop?

If “no”, to perform the sixth step (6th Step).

If “yes”, to perform the next step.

12th Step: Should the Application Software be terminated?

If “no”, to perform the second step (2nd Step).

If “yes”, to end the Application Software.

The analytical flowchart representing the steps foreseen in the algorithm to detect scenery changes is shown in Fig. 3.
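To make steps 4 to 10 concrete, the sketch below implements the comparison core in Python with NumPy and SciPy. It is a minimal stand-in for the LabVIEW/IMAQ application described in the text, not the authors' code: the function names, the use of scipy.ndimage.uniform_filter as the spatial average filter and the synthetic grayscale images are assumptions made for illustration. The default parameters mirror the parameterization used later in the practical tests (gray-level tolerance 4, 95% pixel tolerance, 5x5 mask).

import numpy as np
from scipy.ndimage import uniform_filter

def preprocess(image, mask_size=5):
    """Steps 4 and 7: apply the spatial average (convolution) filter of the
    chosen mask resolution to a grayscale image."""
    return uniform_filter(image.astype(np.float32), size=mask_size)

def scene_unchanged(standard, current, gray_tol=4, pixel_tol_percent=95,
                    mask_size=5):
    """Steps 8-10: compare a captured image against the standard image.

    Returns True when the captured image is considered equal to the standard.
    """
    std_f = preprocess(standard, mask_size)
    cur_f = preprocess(current, mask_size)
    # Step 8: count pixels whose filtered gray levels agree within gray_tol (Acc1)
    acc1 = np.count_nonzero(np.abs(std_f - cur_f) <= gray_tol)
    # Step 9: percentage of matching pixels (Acc2)
    acc2 = 100.0 * acc1 / std_f.size
    # Step 10: decision against the percentage tolerance
    return acc2 >= pixel_tol_percent

# Illustrative usage with synthetic 640 x 480 grayscale images
rng = np.random.default_rng(0)
standard = rng.integers(0, 256, size=(480, 640))
print(scene_unchanged(standard, standard.copy()))   # True: no scenery change

moved = standard.copy()
moved[100:300, 200:500] += 60   # simulate a platform occupying part of the frame
print(scene_unchanged(standard, moved))             # False: a change is detected

In a real deployment, the synthetic arrays would be replaced by grayscale frames delivered by the frame grabber board.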

To make the image comparison performed by the scenery change detection algorithm efficient within this application software, it was necessary to process both the standard image and the image captured from the supervised scene.

This treatment is represented in the fourth step (4th Step) of the proposed algorithm, and its aim is to minimize the previously mentioned variations caused by the lighting, the video camera, the digital frame grabber board and other interference present in the test environment.

In this context, several kinds of spatial filters were tested; however, the best results in the case studied in this work were obtained using spatial convolution with an average filter (Gonzalez, 1992).

The resolution of that average filter affects the system's performance in detecting scenery changes, that is, depending on the mask resolution and the tolerance adopted, the algorithm becomes more or less accurate in its identifications.

Using the Application Software presented in this work, it is possible to implement an average filter with masks whose coefficients are all equal to one (1) and resolutions of: i) three by three (3x3 matrix), ii) five by five (5x5 matrix), iii) seven by seven (7x7 matrix) and iv) nine by nine (9x9 matrix) (Barbosa, 2008).

Eq. (1) presents the masks: i) three by three (Af 3x3) with standardization factor nine (9) and ii) five by five (Af 5x5) with standardization factor twenty-five (25).

Af_3x3 = (1/9) · 1_{3x3},   Af_5x5 = (1/25) · 1_{5x5}   (1)

where 1_{nxn} denotes the n x n matrix with all coefficients equal to one.

Eq. (2) presents the masks: i) seven by seven (Af 7x7) with standardization factor forty-nine (49) and ii) nine by nine (Af 9x9) with standardization factor eighty-one (81).

Af_7x7 = (1/49) · 1_{7x7},   Af_9x9 = (1/81) · 1_{9x9}   (2)
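A minimal sketch of how the normalized average-filter masks of Eqs. (1) and (2) can be built and applied by explicit 2-D convolution is given below; scipy.signal.convolve2d and the symmetric boundary handling are illustrative choices, since the article does not state how image borders are treated.

import numpy as np
from scipy.signal import convolve2d

def average_mask(n):
    """Return the n x n average-filter mask: all coefficients equal to one,
    divided by the standardization factor n*n (9, 25, 49 or 81)."""
    return np.ones((n, n)) / (n * n)

def apply_average_filter(image, n):
    """Convolve a grayscale image with the n x n average mask.

    boundary='symm' mirrors the image at the edges (an assumed choice)."""
    return convolve2d(image.astype(float), average_mask(n),
                      mode='same', boundary='symm')

# The four resolutions mentioned in the text
masks = {n: average_mask(n) for n in (3, 5, 7, 9)}
print(masks[3])          # each 3x3 coefficient equals 1/9
print(masks[9].sum())    # every mask sums to 1.0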

Practical Tests

During the implementation of the practical tests performed to validate the use of different resolutions of convolution mask and tolerances in the efficiency of the proposed algorithm to detect scenery changes at satellite launch vehicle integration pads, the prototype presented in Fig. 2 was used with the installation of Application Software that corresponds to the analytical flowchart shown in Fig. 3.

In this context, one of the sequences used in executing those tests is presented. The sequence starts with the definition of the following values used by the algorithm to detect scenery changes, that is, the parameterization of values for the application software:

i) Tolerance used in the comparison between levels of gray: 4 (four).

ii) Percentage of tolerance used in the comparison between valid pixels: 95 (ninety-five).

iii) Resolution of the convolution mask: 5x5 (5x5 Matrix).

The next action in that sequence relates to the capture and treatment of images that will be used as a standard to perform the comparisons.

Fig. 4 shows part of the main screen of the application software, with the indication of the window that is dedicated to receive the adopted image as a standard in the comparisons that the system will perform. It should be mentioned that the standard image is subjected to the average filter.

After defining the values for the parameters of comparison and of the standard image, it is possible to start the detection of scenery changes by using the continuous capture of images.

This action will constantly perform the comparison between the current image captured by the system and the standard image, based on the algorithm presented.


Fig. 5 presents an example of an image considered equal to the image adopted as standard (Fig. 4) by the algorithm for the detection of scenery changes.

Fig. 6 shows an example of an image captured by the system which, when compared to the image adopted as standard (Fig. 4), indicates a difference due to the algorithm implemented.

The figure also shows, in a specific window, the pixels of the captured image that are considered different when compared to the corresponding pixels of the image adopted as standard (Fig. 4).

Fig. 7 shows another example of an image captured by the system that, when compared to the image adopted as standard (Fig. 4), generates an indication of difference by the implemented algorithm.

Figure 3. Flowchart for detection of scenery changes.

Figure 4. Image adopted as standard.

Figure 5. Image considered equal to the standard.

Figure 6. Image indicating a person on the elevated floor.

Figure 7. Image with indication of movement of the elevated floor.


Other tests were carried out to evaluate the impact of the use of different resolutions of convolution mask and tolerances on the efficiency of the algorithm that detects scenery changes at Satellite Launch vehicle integration pads.

Fig. 8 shows the results of those comparisons considering that the scenery was not changed, adopting a resolution for image processing of 640 by 480 pixels and mask 3 by 3.

Fig. 9 shows the results of the comparisons for the same conditions as in Fig. 8, except that the mask used is 5 by 5.

Fig. 10 shows the results of the comparisons for the same conditions as in Fig. 8, except that the mask used is 7 by 7.

Fig. 11 shows the results of the comparisons for the same conditions as in Fig. 8, except that the mask used is 9 by 9.

Figure 8. Results obtained in comparison of equal images with mask 3x3.

Figure 9. Results obtained in comparison of equal images with mask 5x5.

Figure 10. Results obtained in comparison of equal images with mask 7x7.

Figure 11. Results obtained in comparison of equal images with mask 9x9.

Fig. 12 shows the results of those comparisons considering that the scenery was changed, adopting an image processing resolution of 640 by 480 pixels and mask 3 by 3. It means that the standard image is compared to another image that reveals the changed position of one of the elevated floors.


Fig. 13 shows the results of the comparisons for the same conditions as in Fig. 12, except that the mask used is 5 by 5.

Fig. 14 shows the results of the comparisons for the same conditions as in Fig. 12, except that the mask used is 7 by 7.

Fig. 15 shows the results of the comparisons for the same conditions as in Fig. 12, except that the mask used is 9 by 9.

Figure 12. Results obtained in the comparison of different images with mask 3x3.

Figure 13. Results obtained in the comparison of different images with mask 5x5.

Figure 14. Results obtained in the comparison of different images with mask 7x7.

Figure 15. Results obtained in the comparison of different images with mask 9x9.

CONCLUSIONS

The results observed in the practical tests performed with the prototype presented in this work show that the resolution adopted for the masks used in the average filter directly affects the performance of the algorithm in detecting changes of scenery, that is, the accuracy increases when the resolution of the mask is increased.

For the specific conditions of the tests performed in this work, the following combinations of parameters in the convolution operations and comparison between pixels may be adopted:

i) Mask three by three with tolerance equal to five;

ii) Mask five by five with tolerance equal to four;

iii) Mask seven by seven with tolerance equal to three;

iv) Mask nine by nine with tolerance equal to two.

Considering the combinations mentioned, the one that requires the least processing by the host computer is the parameterization with mask three by three and tolerance equal to five, as shown in Fig. 8 and Fig. 12.

In this work the anticipated goals were fully achieved, mainly concerning the evaluation of different resolutions of convolution mask and tolerances in the efficiency of the algorithm that detects changes of scenery at satellite launch vehicle integration pads.

REFERENCES

Barbosa, S. A., 2008, “Visão Artificial Aplicada na Detecção de Mudança de Cenários: Estudo de Caso em Plataforma de Integração de Veículos Espaciais”, Ph.D. Thesis, University of Campinas, S.P., Brazil.

Gonzalez, R.C., Woods, R.E., 1992,“Digital Image Processing”, Addison-Wesley Publishing Company, Inc., pp. 134-138, Boston, MA, USA.

Klinger, T., 2003, “Image Processing with Labview and Imaq Vision”, Prentice Hall PTR, Upper Saddle River, New Jersey, USA.

NI Vision - NI PCI-1405, 2007, “User Manual - Single-Channel Color Image Acquisition Device”, National Instruments.

Palmério, A. F., 2002, “Introdução à Engenharia de Foguetes”, Technical copy-book of training performed at Institute of Aeronautics and Space, São José dos Campos, S.P., Brazil, pp. 23-25.

Yamanaka, F., 2006, “Análise de Faltas em Modelo Representativo de Sistema Elétrico Proposto para Plataforma de Lançamento de Veículos Espaciais”, Master Thesis in Mechanical Engineering – Technological Institute of Aeronautics, São José dos Campos, S.P., Brazil, pp. 20-22.


Valdirene Aparecida da Silva
Institute of Aeronautics and Space
São José dos Campos – Brazil
[email protected]

José Jesus Pereira
University of Taubaté
Taubaté – Brazil
[email protected]

Evandro Luís Nohara
University of Taubaté
Taubaté – Brazil
[email protected]

Mirabel Cerqueira Rezende*
Institute of Aeronautics and Space
São José dos Campos – Brazil
[email protected]

* author for correspondence

Comportamento eletromagnético de materiais absorvedores de micro-ondas baseados em hexaferrita de Ca modificada com íons CoTi e dopada com La

Resumo: Materiais Absorvedores de Radiação Eletromagnética (MARE) são compostos que absorvem a radiação eletromagnética incidente em determinadas faixas de frequências e a dissipam sob a forma de calor. Esses materiais são obtidos a partir do processamento adequado de matrizes poliméricas incorporadas com compostos que atuam como centros absorvedores da radiação incidente, na faixa de micro-ondas. Este trabalho mostra a avaliação eletromagnética de MARE processados pelo uso de uma hexaferrita de cálcio modificada pela incorporação dos íons CoTi e La. A substituição realizada pelos íons mencionados mostra, via análises de MAV, que a ferrita apresenta baixos valores de magnetização de saturação (123,65 Am2/kg) e de campo coercitivo (0,07 T), indicando o seu amolecimento. Amostras de MARE preparadas com diferentes concentrações desta hexaferrita (40 – 80% em massa) apresentam mudanças nos parâmetros complexos de permeabilidade e permissividade e no desempenho da atenuação da radiação incidente. Valores de atenuação da radiação incidente entre 40 e 98% são obtidos.

Palavras-chave: Materiais absorvedores de radiação, Caracterização eletromagnética, Hexaferrita, Refletividade.

Electromagnetic behavior of radar absorbing materials based on Ca hexaferrite modified with Co-Ti ions and doped with La

Abstract: Radar Absorbing Materials (RAM) are compounds that absorb incidental electromagnetic radiation in tuned frequencies and dissipate it as heat. Its preparation involves the adequate processing of polymeric matrices filled with compounds that act as radar absorbing centers in the microwave range. This work shows the electromagnetic evaluation of RAM based on CoTi and La doped Ca hexaferrite. Vibrating Sample Magnetization analyses show that ion substitution promoted low values for the parameters of saturation magnetization (123.65 Am2/kg) and coercive field (0.07 T) indicating ferrite softening. RAM samples obtained using different hexaferrite concentrations (40-80 per cent, w/w) show variations in complex permeability and permittivity parameters and also in the performance of incidental radiation attenuation. Microwave attenuation values between 40 and 98 per cent were obtained.

Keywords: Radar absorbing materials, Electromagnetic characterization, Hexaferrite, Reflectivity.

Received: 20/09/09 Accepted: 06/11/09

INTRODUCTION

Electromagnetic Radiation Absorbing Materials (MARE, from the Portuguese acronym), traditionally known in English as Radar Absorbing Materials (RAM), are materials made up of compounds that cause energy losses in electromagnetic radiation. In specific frequency bands, these materials attenuate the incident electromagnetic wave and dissipate the absorbed energy as heat, through internal magnetic and/or dielectric mechanisms. These loss mechanisms can be physical or chemical in nature, or both simultaneously (Dias, 2000; Nohara, 2003; Folgueras, 2005; Hallynck, 2005; Pereira, 2007).

Considering the applications of these materials in the military sector, the energy scattered by a target (radar echo), which would be used to detect it by means of a radar, is attenuated, and an object coated with RAM becomes harder to detect or, as reported in the literature, "invisible" to radar. In the civilian sector there are many applications for RAM, including its use in telecommunications, in the coating of mobile phones and radio-transmission antennas; in medicine, for example, in the coating of pacemakers; in electronics, in the lining of anechoic chambers used in research and industrial control; and in household appliances in general, for electromagnetic shielding and interference control, among other applications (Pinho et al., 1999; Nohara, 2003).

Absorbing materials are composite materials normally used as coatings, which can take various forms, such as elastomeric plates of polymers based on polyisoprene and polychloroprene; flexible blankets of different types of rubber; paints based on epoxy, phenolic and polyurethane resins; and foams made from natural and synthetic precursors (Gupta et al., 1994; Dias, 2000).

The possibility of tuning the electrical and magnetic properties of these materials so as to optimize the attenuation of incident microwaves, at specific frequencies or over a broad frequency spectrum, is one of the most important characteristics of these composites. Other relevant characteristics, continuously investigated in the RAM field, are durability, low density, low cost, performance over a wide frequency range and ease of application (Pinho et al., 1999; Petrov and Gagulin, 2001; Paulo et al., 2004; Yusoff and Abdullah, 2004). Absorbing materials can be resonant, that is, materials that act in a narrow frequency band, or broadband absorbers, also called intrinsic absorbers, which act over wider frequency ranges.

The first magnetic material known to man was magnetite, Fe3O4, which became known as iron ferrite. In 1948, Néel (in Lax and Button, 1962) described a number of basic spin-spin interaction phenomena occurring in ferrites. Ferrites are a very important class of magnetic materials because they contain ordered magnetic ions that can generate spontaneous magnetization while maintaining good dielectric properties. Ferrites are usually obtained by stoichiometric synthesis of mixtures of certain metal oxides at high temperatures (1000 to 1500°C) (Von Aulock, 1965; Meshram et al., 2003; Ribeiro, 2006).

The magnetic properties of ferrites are related to the electrons in the incomplete shell of the transition-metal ions. The electron generates a magnetic field around the atom and around its own axis, called spin; these motions generate a magnetic field, called a magnetic dipole. The perturbations caused by the magnetic dipoles characterize the magnetic moment. The interaction of the magnetic moments induced by an applied external magnetic field results in the macroscopic magnetic properties of materials (Paulo, 2006). The sum of these moments gives the magnetic moment of the atom (Nedlov, Milenova and Dishivsky, 1994; Meshram et al., 2003; Nohara, 2003).

Ferrites can be considered the oldest and most widely used electromagnetic radiation "absorption centers" in RAM processing technology.

The development of hexagonal ferrites has received significant attention with a view to their application in the processing of microwave absorbing materials covering the 1-100 GHz frequency range (Horvath, 2000; Petrov and Gagulin, 2001). Because of their permanent magnetization characteristics, hexagonal ferrites are referred to as hard materials, whereas spinel and garnet ferrites, which have high permeability and are easily magnetized under the influence of an external magnetic field, are referred to as soft materials (Cabral, 2005; Lima, 2007). Hard ferrites are normally composed of ferric oxides and oxides of barium, calcium, strontium or lead, usually in the proportion MeO·6Fe2O3 (M-type ferrite), although there are other stoichiometries, such as BaM2Fe16O27 (W-type), Ba2M2Fe12O22 (Y-type) and Ba3M2Fe24O41 (Z-type), where Me represents a transition-metal dopant of valence (II) (Horvath, 2000; Cabral, 2005).

Currently, in order to improve the performance of ferrites as electromagnetic radiation absorbing centers, ion substitution has been widely studied as a way of tuning the ferrite to the frequency range of interest for the absorbing material. Metallic cations, or combinations of cations, can reduce the magnetocrystalline anisotropy, giving ferrites new properties with a variety of applications (Bueno, 2003; Lima, 2007; Paulo, 2006). Electronic applications require control of the magnetic properties of the material, such as homogeneity, particle size and shape, and the dependence of coercivity (Hc) on temperature and polarization (Lima, Leandro and Ogasawara, 2003; Rewatkar, Patil and Gawali, 2005; Sláma et al., 2005).

Ions such as Co2+ and Ti4+ are good substituents in hexaferrites because they reduce the anisotropy field and lower Hc without causing a significant reduction in magnetic polarization, enabling their use as absorbing centers in RAM processing at lower frequency bands (Zhou et al., 1994; Rewatkar, Patil and Gawali, 2005). Hexaferrites normally exhibit natural resonance at high frequencies, above 45 GHz, as shown in the literature (Michalikovh et al., 1994).

The literature also shows that the parameters of Ca and Ba hexaferrites can be modified by substituting Fe3+ ions with other cations or with combined cations such as Co-Ti, Zn-Ti, Zn-Sn, Co-Sn, Ni-Zr and Co-Mo (Horvath, 2000; Petrov and Gagulin, 2001; Feng and Jen, 2002; Haijun et al., 2002; Qiu, Zhang and Mu, 2005).

Studies of the substitution of Fe3+ and Ba2+ ions in Ba hexaferrites by combined Zn-Sn, Co-Sn, Ni-Zr and Co-Mo ions have shown good results in varying the magnetic properties of barium ferrite (Qiu, Zhang and Mu, 2005). Barium hexaferrite is a hard magnetic material with high saturation magnetization, high coercivity, a high magnetic anisotropy field and excellent chemical stability. In this regard, the substitution of Ba2+ by La3+ has shown good results, since it softens the hexaferrite while maintaining its stability.

As already mentioned, M-type hexaferrites have the proportion MeO·6Fe2O3, where Me is an alkaline-earth metal (Ba, Ca, Sr or Pb), with a complex structure but a single c-axis, which is the easy magnetization axis of the basic structure. This magnetization cannot be switched to other axes because of the hard-material characteristics of these ferrites (Narang and Hudiara, 2006; Rewatkar, Patil and Gawali, 2005; Sláma et al., 2005).

In this regard, aiming to contribute to the consolidation of the RAM processing area at IAE through the use of new ferrite formulations, this work presents the electromagnetic characterization (permittivity, permeability and reflectivity measurements), in the 8.2 to 12.4 GHz range, of absorbers prepared using a Ca hexaferrite ([Ca(CoTi)0.2Fe11.6O19]96.0[La2O3]4.0) intentionally modified with CoTi and La ions so as to favor its performance as a microwave absorbing center in the X-band.

MATERIALS AND METHODS

Materials

The present work used a Ca hexaferrite of formula [Ca(CoTi)0.2Fe11.6O19]96.0[La2O3]4.0. This hexaferrite was prepared for this study, with the intentional incorporation of the CoTi and La ions, at the Sontag/SP company. The sample was successfully obtained by the powder metallurgy process, following the procedure described in the literature (Singh et al., 1999).

Specimen preparation

A two-component epoxy polymer matrix (Araldite Profissional, from CIBA) was used to prepare the RAM specimens. The RAM specimens were prepared by mixing 40, 50, 60, 70 and 80% by mass of the hexaferrite into the resin and casting the mixture into (23 x 11 x 10) mm3 brass molds, made from a section of the same waveguide used in the electromagnetic characterization. This procedure was adopted to guarantee a perfect fit of the specimen in the waveguide, with no gaps between the walls of the device and the sample. Thus, all the specimens obtained matched the internal dimensions of the waveguide exactly, eliminating possible errors in the electromagnetic characterization.

The thicknesses of the specimens for these measurements were defined based on the Nicolson-Ross methodology (ASTM, 2008), described in detail by Pereira (2007). The specimen thicknesses were adjusted by manual sanding, and this parameter and the parallelism of the faces were controlled with a caliper. The specimens were cured at room temperature for 24 h.

Magnetization Measurements

The magnetization measurements of the prepared hexaferrite were performed with a Vibrating Sample Magnetometer (VSM). The analysis was carried out directly on the obtained ferrite powder. The VSM used was assembled and developed at the Laboratório de Magnetismo e Materiais Magnéticos (LMMM) of the Departamento de Física Teórica e Experimental (DFTE) at UFRN/Natal.

Electromagnetic Characterization

The RAM specimens were characterized using two different methodologies. The first involved measurements of the S-parameters (S11, S21, S12 and S22), from which the complex permeability and permittivity values were calculated according to the ASTM standard (ASTM, 2008), and the second involved reflectivity measurements backed by a metal plate. Both measurements were performed using a rectangular waveguide, in what is known as the "Transmission Line Method" or "Guided-Wave Transmission/Reflection Method", in the 8.2 to 12.4 GHz frequency range (X-band). For this purpose, a vector network analyzer, model 8510C, from the former HP (now Agilent Technologies), equipped with a WR-90 calibration kit, was used. All the characterization equipment was properly calibrated before the measurements were taken. The permeability, permittivity and reflectivity values obtained have a calculated uncertainty of ±0.5%. In the reflectivity measurements (parameter S11), the RAM specimen was placed in the sample holder on top of a (conductive) aluminum metal plate, which is taken as the reference, i.e., a 100% reflective material.

The S-parameters are defined in a matrix that contains information on the scattering properties of the electromagnetic waves, where S11 and S22 represent the reflected energy and S12 and S21 the transmitted energy (Pereira, 2007). According to the principle of energy conservation, the energy of the electromagnetic wave incident on the material, Ei, can be totally or partially reflected (Er), attenuated (Ea) or transmitted (Et). The latter represents the energy that passes through the structure of the material and is neither absorbed nor reflected (Folgueras, 2005; Nohara, 2003; Pereira, 2007). Equation 1 represents the sum of the reflected, transmitted, absorbed and dissipated (Ed) energies.

Ei = Er + Et + Ea + Ed (1)
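As a brief illustration of how the measured S-parameters relate to the energy balance of Eq. (1), the snippet below converts S11 and S21 magnitudes into reflected, transmitted and absorbed power fractions; the numerical values are hypothetical and the two-port, power-fraction interpretation is a simplification used only for illustration.

def power_balance(s11_mag, s21_mag):
    """Split the incident power into reflected, transmitted and absorbed
    fractions from the linear (not dB) magnitudes of S11 and S21.

    Assumes |S11|^2 is the reflected and |S21|^2 the transmitted power
    fraction; whatever remains is attributed to absorption/dissipation
    in the sample.
    """
    reflected = s11_mag ** 2
    transmitted = s21_mag ** 2
    absorbed = 1.0 - reflected - transmitted
    return reflected, transmitted, absorbed

# Hypothetical magnitudes at a single frequency point
r, t, a = power_balance(0.3, 0.5)
print(f"reflected={r:.2f}, transmitted={t:.2f}, absorbed={a:.2f}")
# reflected=0.09, transmitted=0.25, absorbed=0.66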

Figure 1 (Pereira, 2007) schematically presents the variables related to the interaction between the electromagnetic wave and the RAM inside the device used in the transmission line method.

Figure 1: Schematic of the device used in the transmission line method, in a waveguide. Ei – incident energy, Er – reflected energy, Et – transmitted energy, Ed – dissipated energy, Ea – energy absorbed by the material (Pereira, 2007).

Table 1: Relationship between reflectivity and the percentage of absorbed energy (Lee, 1991).

Radiation attenuation (dB)    Absorption of incident radiation (%)
  0                             0
 -3                            50
-10                            90
-15                            96.9
-20                            99
-30                            99.9
-40                            99.99

For the determination of permeability and permittivity from the S-parameters using the Nicolson-Ross model, the specimen thickness must lie within the electrical-thickness interval between λg/18 and λg/2, which corresponds to 20° to 180° of phase of the guided wavelength inside the specimen, respectively (ASTM, 2008; Agilent Technologies, 2005b; Pereira, 2007), the optimum electrical thickness being λg/4 (90°). It is important to avoid a thickness that produces phase cancellation of the electromagnetic wave, as this induces errors in the Nicolson-Ross algorithm used to calculate the permeability and permittivity values. For this reason, a thickness sweep (3.0 – 10 mm, in 0.1 mm steps) was performed for each hexaferrite concentration in the sample, in order to check for phase cancellation and to obtain thickness values close to λg/4 for determining the permeability and permittivity parameters in the 8.2 – 12.4 GHz frequency range. The optimum thickness values are presented in Table 2.

Table 2: Sample thickness values used for the S-parameter determination and in the calculation of the permeability and permittivity values, according to the Nicolson-Ross model.

Hexaferrite concentration in the epoxy resin (% by mass)    Thickness (mm)
40                                                          4.47
50                                                          4.75
60                                                          3.45
70                                                          3.40
80                                                          3.05
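The λg/4 criterion described above can be checked numerically. The sketch below estimates the guided wavelength inside a material-filled WR-90 waveguide (broad wall a = 22.86 mm, TE10 mode) and the corresponding quarter-wave thickness, under a low-loss approximation that uses only the real parts of the permittivity and permeability; the sample values are illustrative, not the article's measured data.

import math

C = 299_792_458.0      # speed of light in vacuum, m/s
A_WR90 = 22.86e-3      # WR-90 broad-wall dimension, m (TE10 cutoff wavelength = 2a)

def quarter_wave_thickness(freq_hz, eps_r, mu_r=1.0):
    """Estimate the guided wavelength in a filled WR-90 waveguide and the
    lambda_g/4 thickness (low-loss approximation, real eps_r and mu_r only)."""
    lam0 = C / freq_hz
    lam_c = 2.0 * A_WR90
    lam_g = lam0 / math.sqrt(eps_r * mu_r - (lam0 / lam_c) ** 2)
    return lam_g, lam_g / 4.0

# Illustrative values near the low end of the X-band
lam_g, t_opt = quarter_wave_thickness(8.2e9, eps_r=4.5)
print(f"lambda_g = {lam_g * 1e3:.1f} mm, lambda_g/4 = {t_opt * 1e3:.1f} mm")
# about 18.6 mm and 4.7 mm, of the same order as the thicknesses in Table 2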

Reflectivity is the ratio between the electromagnetic energy reflected by the material and the energy incident on it, and it is expressed in dB (decibel, one tenth of a bel), as given in Equation 2 (Pereira, 2007). The relationship between attenuation in dB and the percentage of electromagnetic radiation absorbed (energy absorbed by the material) is presented in Table 1 (Lee, 1991).

Reflectivity (dB) = 10 log10 (Er/Ei) (2)
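A short numerical check of the relationship summarized in Table 1: treating the reflectivity as a power ratio in dB, the absorbed percentage follows directly from the definition of the decibel, as sketched below (purely illustrative).

def absorbed_percent(reflectivity_db):
    """Percentage of the incident energy absorbed for a given reflectivity in
    dB, assuming reflectivity = 10*log10(Er/Ei) for power quantities."""
    return 100.0 * (1.0 - 10.0 ** (reflectivity_db / 10.0))

for r_db in (0, -3, -10, -15, -20, -30, -40):
    print(f"{r_db:>4} dB -> {absorbed_percent(r_db):6.2f} % absorbed")
# approximately 0, 49.9, 90, 96.8, 99, 99.9 and 99.99 %, matching Table 1
# to its quoted precision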


RESULTS AND DISCUSSION

Magnetic measurements

Figure 2 presents the hysteresis curve of the Ca hexaferrite ([Ca(CoTi)0.2Fe11.6O19]96.0[La2O3]4.0). The graph shows that the saturation magnetization reaches 123.65 Am2/kg and that the coercive field (Hc) is quite low, at 0.07 T. A hysteresis behavior with low energy losses is also observed. Hexaferrites, however, are normally considered hard materials, with a high coercive field and natural resonance at high frequencies, above 45 GHz, as shown in the literature (Michalikovh et al., 1994). These results and the low losses upon reversal of the magnetic field (remanent magnetization equal to 43.73 Am2/kg) indicate that the characterized Ca ferrite is a soft magnetic material. These results therefore confirm that the modification of the Ca hexaferrite by incorporating CoTi and La ions into its structure favored its softening, in agreement with the literature (Singh et al., 1999), probably shifting its natural resonance to lower frequencies.

Figure 2: Magnetic measurements (hysteresis curve) of the Ca hexaferrite [Ca(CoTi)0.2Fe11.6O19]96.0[La2O3]4.0. Values indicated on the plot: magnetic permeability 17.42 ± 0.4014; coercive field (Hc) 0.07 T; remanent magnetization (Mr) 43.73 Am2/kg; saturation magnetization (Ms) 123.65 Am2/kg; axes: magnetization (Am2/kg) versus magnetic induction (T).

Electromagnetic Measurements

The permeability (μr) and permittivity (εr) measurements of the RAM samples prepared with the [Ca(CoTi)0.2Fe11.6O19]96.0[La2O3]4.0 ferrite are presented in Figure 3. The different hexaferrite concentrations in the RAM samples lead to different dielectric behaviors, since the complex permeability and permittivity values versus frequency are extrinsic properties of the ferrites. The analysis of this figure reveals that increasing the ferrite concentration in the composite produces a more pronounced increase in the real component of the permittivity (εr' – storage component) and a less significant increase in the imaginary component (εr" – loss component). The behavior observed for the real permittivity (εr') measurements is consistent with the measured reflectivity curves (Figure 4), where the attenuation increases gradually as the hexaferrite concentration in the absorber increases.

Figure 3(a) refers to the sample with 40% (w/w) ferrite, for which εr' is 4.0 at 8.2 GHz, dropping to 3.92 at the end of the band (12.4 GHz). The εr" parameter shows values in the 0.45 – 0.73 range between 8.2 and 12.4 GHz. The real and imaginary complex permittivity values of the RAM with 50% (w/w) are presented in Figure 3(b). The results found for εr' (4.56) show no significant variation, remaining practically constant over the whole frequency band, while the measured εr" value varies from 0.56 to 0.77, in agreement with the literature (Singh et al., 1999; Hallynck, 2005).

When the hexaferrite concentration in the sample is increased to 60% by mass (Figure 3(c)), the real and imaginary permittivity values, i.e., the dielectric storage and dissipation values, are around 5.02 and 0.71 – 0.83, respectively, over the whole measurement frequency range.

Figure 3(d) presents the εr' and εr" values of the microwave absorbing material with 70% ferrite by mass. Compared with the other samples discussed, this figure shows an increase in the permittivity values of both the real and the imaginary components (6.75 – 6.82 for εr' and 1.28 – 1.51 for εr", over the whole frequency range). Figure 3(e) presents the curve for the sample with 80% hexaferrite by mass. Both the εr' and the εr" values increase with the increase in concentration (9.88 – 9.55 and 2.31 – 2.38, respectively) over the whole frequency range. The complex permeability components (μ' and μ") show no variation in the frequency range studied.

Figure 3: Real and imaginary permeability and permittivity measurements of the RAM samples prepared with the [Ca(CoTi)0.2Fe11.6O19]96.0[La2O3]4.0 hexaferrite, at concentrations of (a) 40%, (b) 50%, (c) 60%, (d) 70% and (e) 80% by mass.

Figure 4(a) presents the reflectivity measurement curves (with an aluminum plate) of the RAM samples with the modified Ca hexaferrite at concentrations of 40%, 50%, 60%, 70% and 80% by mass. The straight line at 0 dB (in black) represents the reflective metal plate (reference material), i.e., no attenuation of the incident electromagnetic wave. The black curve represents the reflectivity behavior of the sample with 40% hexaferrite by mass. The thicknesses of the RAM samples in these measurements are the same as those used for the S-parameter determination (Table 2). It can be seen that the reflectivity of the incident wave is -5 dB at 8.3 GHz, which corresponds to approximately 70% attenuation of the incident radiation. With 50% ferrite by mass in the absorbing material, the reflectivity is -6.3 dB at 8.2 GHz, or approximately 75% attenuation (red curve).

Increasing the hexaferrite content of the RAM to 60% by mass, the reflectivity shows a resonance peak of -5.9 dB in the 9.7 – 10.4 GHz frequency range (green curve). A better performance of the absorbing material is observed when the ferrite mass concentration is increased to 70%: in the blue curve, the resonance reaches a reflectivity of -12.9 dB, around 95% attenuation of the incident electromagnetic wave, at 8.7 GHz. Increasing the hexaferrite concentration in the absorbing material improves the reflectivity behavior, that is, the performance of the absorber, because it makes its magnetic and dielectric characteristics similar to those of polycrystalline materials (Bueno, 2003; Lima, 2007; Paulo, 2006). This behavior is confirmed by the cyan reflectivity curve, which shows a resonance close to -16.1 dB at 8.2 GHz, i.e., approximately 98% attenuation of the incident electromagnetic wave; it is also noted that, at this concentration, the curve shows a tendency toward greater attenuation at lower frequencies.

Figure 4: Reflectivity curves of the RAM containing the [Ca(CoTi)0.2Fe11.6O19]96.0[La2O3]4.0 hexaferrite at concentrations of 40%, 50%, 60%, 70% and 80% by mass, for thicknesses (a) as described in Table 2 and (b) of 2.90 mm.

Figure 4(b) presents the reflectivity measurement curves (with an aluminum plate) of the RAM samples with 40%, 50%, 60%, 70% and 80% by mass of the Ca hexaferrite, at a thickness of 2.90 mm. With the decrease in specimen thickness to 2.90 mm, the maximum attenuation decreases (-9.5 dB) and shifts to 10.3 GHz. With 80% ferrite by mass in the RAM, however, the attenuation of the incident electromagnetic wave improved (-25.3 dB) (Figure 4(b)) when compared with the 3.05 mm thick sample, which shows a reflectivity of -16.1 dB (Figure 4(a)).

The analysis of Figure 4(a) shows most clearly, for the sample with 70% ferrite by mass (blue curve), a "V" shape, suggesting attenuation by phase cancellation of the wave. This attenuation mechanism is based on obtaining a suitable electrical thickness of the RAM, i.e., the electromagnetic wave reflected at the front face cancels with the wave emerging from the material, since they are in opposite phases of 0° and 180° (Pereira, 2007). There is also a shift of the resonance region to lower frequencies with the increase in hexaferrite concentration in the material and with the variation in thickness (Figure 4(b)). These results suggest the application of the materials obtained at frequencies below 8 GHz.

CONCLUSIONS

The electromagnetic characterization of the RAM samples prepared with [Ca(CoTi)0.2Fe11.6O19]96.0[La2O3]4.0 ferrite concentrations varying from 40 to 80% by mass shows attenuation values from -5.0 to -25.3 dB, corresponding to 70 to 99.5% attenuation of the incident wave. This behavior is consistent with the more pronounced increase in the εr' component, which varied from approximately 4 to 10 for the RAM samples. These results are attributed to the softening of the magnetic behavior of the calcium hexaferrite, which, after the incorporation of the CoTi and La ions, probably shifted its resonance frequency to lower frequencies, enabling its use as a microwave absorbing center in the X-band (8.2 to 12.4 GHz).

ACKNOWLEDGEMENTS

The authors thank the Sontag/SP company, in the person of Mr. Mário Dutra (in memoriam), for preparing the ferrite sample, the Departamento de Física Teórica e Experimental of UFRN for the VSM measurements, and FINEP (Process 1757/04), CNPq (Process 301583/06-3) and the Aeronautics Command for financial support.


REFERENCES

Agilent Technologies, 2005b, “Materials Measurement Software Agilent 85071E”, Agilent Technical Overview, Palo Alto, CA, USA, 8 p.

AMERICAN SOCIETY FOR TESTING AND MATERIALS, 2008, “ASTM D5568-08: Standard Test Method for Measuring Relative Complex Permittivity and Relative Magnetic Permeability of Solid Materials at Microwave Frequencies Using Waveguide” .

Bueno, A. C. , 2003, “Síntese e caracterização da ferrita de NiZn dopada com íons metálicos para aplicações em absorvedores de radiações eletromagnéticas”, Ph.D. Thesis , Engenharia Metalúrgica e de Materiais, COPPE/UFRJ, Rio de Janeiro, R.J., Brazil, 164p.

Cabral, A.J.O., 2005. “Síntese de Hexaferrita de Bário Dopada com Cobalto-Titânio por Moagem Quimicamente Assistida Seguida de Calcinação”, Ph.D. Thesis, Engenharia Metalúrgica e de Materiais, COPPE/UFRJ, Rio de Janeiro, R.J., Brazil.

Dias, J. C., 2000, “Obtenção de Revestimentos Absorvedores de Radiação Eletromagnética (2-18 GHz) Aplicados no Setor Aeronáutico”, Ph.D. Thesis, Instituto Tecnológico de Aeronáutica, São José dos Campos, S.P., Brazil.

Dishovski, N., Petkov, A., Nedkov, I. V., Razkazov, I.V., 1994, “Hexaferrite Contribution to Microwave Absorbers Characteristics”, IEEE Transactions on Magnetics. Vol. 30, No. 2, pp. 969-971.

Feng, Q.; Jen, L., 2002, “Microwave Properties of ZnTi-substituted M-type Barium Hexaferrites”, IEEE Transactions on Magnetics, Vol. 38, No. 2, pp. 1391-1394.

Folgueras, L.C., 2005, “Obtenção E Caracterização de Materiais Absorvedores de Microondas Flexíveis Impregnados com Polianilina”, Ph.D. Thesis, Instituto Tecnológico da Aeronáutica, São José dos Campos, S.P., Brazil.

Gupta, S. C., Agrawal, N. K., C., Kumar, M.V., 1992, “Design of a Single Layer Broadband Microwave Absorber Using Cobalt-Substituted Barium Hexagonal Ferrite”, IEEE Microwave Theory and Techniques Society, pp. 317-320.

Haijun, Z., Zhichao, L., Chengliang, M.A., XI, Y.; Liangying, Z.; Mingzhong, W., 2002, “Complex Permittivity, Permeability, and Microwave Absorption of Zn- and Ti-Substituted Barium Ferrite by Citrate Sol-Gel Process”, Materials Science and Engineering, Vol. B96, pp. 289-295.

Hallynck, S., 2005, “Elaboration et Caractérisations de Composites Chargés en Ferrite Spinelle à Morphologie Contrôlée Pour Utilisations Micro-Ondes”, Ph.D. Thesis, Docteur de L’Université Strasbourg I – Louis Pasteur Spécialité, Physique-Chimie des Matériaux, France.

Horvath, M.P., 2000, “Microwave Applications of Soft Ferrites”, Journal of Magnetism and Magnetic Materials, Vol. 215-216, pp. 171-183.

Lax, B., Button, K.J., 1962, “Microwave Ferrite and Ferrimagnetics”. New York, McGraw-Hill Book Company, pp. 47, 92-95, 114-124.

Lee, S. M., 1991, “International Encyclopedia of Composites”. VCH Publishers, Vol.6.

Lima, R.C., 2007, “Propriedades absorvedoras de microondas de compostos epoxídicos de Y-hexaferritas de bário obtidas pelo método de combustão do gel de citrato”, Ph.D. Thesis , Engenharia Metalúrgica e de Materiais , COPPE/UFRJ, Rio de Janeiro, R.J., Brazil, 180p.

Lima, R.C., Leandro, J. C. S., Ogasawara, T., 2003. “Síntese e Caracterização da Hexaferrita de Bário Tipo m Dopada com Lantânio e Sódio para Utilização Como Absorvedor de Microondas”, Revista Cerâmica, Vol. 49, pp. 44-47.

Meshram, M. R., Agrawal, N. K., Sinha, B., Misra, P.S., 2003. “Characterization of (Co-Mn-Ti) Substituted M Type Barium Hexagonal Ferrite Based Microwave Absorber at X Band”, Antennas, Propagation and EM Theory, Vol. 28, pp. 746 -749.

Michalikovh, M., Gruskovh, A., Vicen, R., Lipka, J., Sláma, J., 1994, “Co, Ti, Mn-Precipitated Barium Hexaferrite Powders”, IEEE Transactions on Magnetics, Vol. 30, No. 2, pp. 654-656.

Narang, S. B., Hudiara, I.S., 2006, Microwave dielectric properties of M-Type barium, calcium and strontium hexaferrite substituted with Co and Ti, Journal of Ceramic Processing Research, Vol. 7, Nº. 2, pp. 113-116.

Nedlov, I., Milenova, L., Dishivsky, N., 1994, “Microwave Polymer-Ferroxide Film Absorbers”, IEEE Transactions on Magnetics, Vol. 30, Nº. 6, pp. 4545-4547.

Nohara, E.L., 2003, “Materiais Absorvedores de Radiação Eletromagnética (8-12 GHz) Obtidos pela Combinação de Compósitos Avançados Dielétricos e Revestimentos Magnéticos”, Ph.D. Thesis, Instituto Tecnológico de Aeronáutica, São José dos Campos, S.P., Brasil.


Paulo, E.G., Pinho, M. S., Lima, C. R., Gregori, M. L., Ogasawara, T., 2004, “Compósitos de Ferrita de Ni-Zn com Policloropreno para Utilização como Materiais Absorvedores de Radar Para a Banda S”, Cerâmica, Vol. 50, Nº. 314, pp. 161-165.

Paulo, E. G., 2006, “Síntese e caracterização de ferrita de níquel e zinco nanocristalina por combustão, para aplicação em compósito elastomérico absorvedor de microondas”, Master Thesis, Engenharia Metalúrgica e de Materiais, COPPE/UFRJ, Universidade Federal do Rio de Janeiro, R.J., Brazil.

Pereira, J. J., 2007, “Caracterização Eletromagnética de Materiais Absorvedores de Microondas Via Medidas de Permissividade e Permeabilidade Complexas na Banda X”, Universidade de Taubaté, Taubaté, S.P., Brazil.

Petrov, V. M., Gagulin, V.V., 2001, “Microwave Absorbing Materials”, Inorganic Materials, Vol. 37, Nº. 2, pp. 93-98.

Pinho, M.S., Lima, R.C., Soares, B.G., Nunes, R. C. R., 1999. “Avaliação do Desempenho de Materiais Absorvedores de Radiação Eletromagnética por Guia de Ondas”, Polímeros, Vol. 9, No. 4, pp. 23-26.

Qiu, J., Zhang, Q., Mu, G., 2005, “Effect of Aluminum Substitution on Microwave Absorption Properties of Barium Hexaferrite”, Journal of Applied Physics, Vol. 98, pp. 103905-1-103905-5.

Rewatkar, K.G., Patil, N. M., Gawali, S. R., 2005, “Synthesis and Magnetic Study of Co–Al Substituted Calcium Hexaferrite”, Bulletin of Materials Science, Vol. 28, Nº. 6, pp. 585–587.

Ribeiro, U.L., 2006, “Síntese e Caracterização de Nanoferritas a Base de Níquel-Zinco e Níquel-Cobre-Zinco”. Dissertação de Mestrado, Universidade Federal do Rio Grande do Norte, Natal, RN, Brazil.

Singh, P., Babbar, V. K., Razdan, A., Srivastana, S. L., Puri, R. K., 1999, “Complex Permeability and Permittivity, and Microwave Absorption Studies of Ca(CoTi)xFe12-2xO19 Hexaferrite Composites in X-Band Microwave Frequencies”, Materials Science and Engineering, Vol. B67, pp. 132-138.

Sláma, J., Grusková, A., Papánová, M., Kevická, D., Jancárik, V., Dosoudil, R., Mendoza-Suárez, G., González-Angeles, A., 2005, “Properties of M-Type Barium Ferrite Doped by Selected Ions”, Journal of Electrical Engineering, Vol. 56, Nº. 1-2, pp. 21–25.

Von Aulock, W.H., 1965, “Handbook of Microwave Ferrite Materials”, Academic Pres, New York, pp. 1-3, 15, 351, 452, 471.

Yusoff, A. N., Abdullah, M. H., 2004, “Microwave Eletromagnectic and Absorption Properties of Some LiZn Ferrites”, Journal of Magnetism and Materials, Vol. 269, pp. 171-180.

Zhou, X. Z., Morrish, A. H., Yang, Z., Zeng, H., 1994, “Co-Sn Substituted Barium Ferrite Particles”, Journal Applied Physics, Vol. 75, Nº. 10, pp. 5556-5558.


Miriam C. Bergue Alves*
Institute of Aeronautics and Space
São José dos Campos, Brazil
[email protected]

Ana Marlene F. Morais
Institute of Aeronautics and Space
São José dos Campos, Brazil
[email protected]

* author for correspondence

Received: 23/09/09  Accepted: 29/10/09

The management of knowledge and technologies in a space program

Abstract: This paper presents an ongoing work at the Institute of Aeronautics and Space (IAE) to provide a process and a system to support the management of knowledge and new technologies applied to the conception and development of the Brazilian Satellite Launcher Program. This management is necessary not only to organize the current research efforts but also to identify commonalities and needs for the strategic planning of future research projects and development activities. The results of the research projects are usually new technologies that ought to be employed in the development of the Launcher Program. The proposed knowledge management system will not only enable assessing these new technologies but also help in defining and planning the research topics in each important area of this multidisciplinary program, according to the Institute's strategic goals and space mission.

Keywords: Space systems, Technology, Knowledge management, Research projects, System engineering.

INTRODUCTION

In general, space programs deal with complex systems that apply advanced technology and investigate solutions to new and singular problems arising in the space environment. This is one of the reasons for the high level of dependency on government and commercial stakeholders for space capabilities.

The mix of technical professionals from industry, academia, and government is widespread in these programs. This combination of expertise has to be well coordinated in order to have a well balanced investment in basic and applied research, resulting in the necessary technologies for the success of a space mission.

Another considerable concern is the strategic aspect of such technology. Nowadays, the partnership with other similar international institutions and academia is crucial for the sustainability of such programs and also for cost reductions. However, considering the growth in the commercial use of space technology, these partnerships have to be based on solid agreements between the governments in order to preserve their own interests.

Complex space systems also require rigorous systems engineering, with careful planning and attention to the process. The management of knowledge and technology in these programs is becoming even more important as their complexity increases. The launching environment, for instance, is unique, and the launching event lasts only minutes; however, it requires a great deal of integration of the systems engineering effort and resources.

Considering all the challenges ahead, the research and development process of new space technologies ought to be a well-controlled one, with clear goals defined, a timetable set up and a range of costs established. The human resources and risks also have to be managed appropriately. Given that resources are not unlimited, controlling this process is a “must” for the success of any space program.

This paper presents an ongoing work developed at IAE to provide a process and a knowledge management system to deal with new technologies applied in the conception and development of the Brazilian Satellite Launcher Program. The knowledge management is necessary not only for organizing the current research efforts but also for identifying commonalities and needs for the strategic planning of future research projects and development activities.

The next section presents the peculiarities of system engineering in a space program, followed by the section where the methodology is presented. Then some results are discussed, followed by the final conclusions.

THE COMPLEXITY OF SPACE SYSTEMS ENGINEERING

Systems engineering plays a fundamental role in projects that involve emerging technologies and can be defined as management technology (Sage, 1992).



Organization, projects, and production of scientific knowledge are intrinsically associated with the systems engineering life-cycle and, although space systems follow the same fundamental systems engineering principles as any other system, there are particularly challenging aspects of space systems that need special consideration.

Space systems engineering involves the challenges of a rigorous launching environment, where the structural elements of both launcher and satellite must be designed to withstand the remarkable forces due to the thrust of the launcher motors as well as vibrational and acoustical loads. Sensitive electronics and sensor elements must also withstand the shock conveyed by the pyrotechnic devices used for the launcher's stage separation and the deployment of the satellite. There are also tight constraints on both mass and volume that impact costs.

Costs play an important role in these programs (Wertz and Larson, 1996). The greater the lift capability required to reach the mission's target orbit, the more expensive the launcher will be. The design of large structural elements demands new technologies, and original concepts have to be developed.

All the elements and issues mentioned before have to be brought into a systems engineering life-cycle, in which a number of activities, grouped in phases, should be carried out in an interactive and iterative way, with the purpose of delivering a product or system that fulfills the space mission requirements. In the case of space systems, a large part of the requirements are transformed into technical and functional requirements that typically demand new knowledge, therefore implying research efforts.

Other mission requirements are converted into system management requirements that will guarantee the development of a product or the delivery of a service. Additionally, in a large and complex space system there is another management component, called knowledge management.

Technological projects, more than most others, have the potential to fail to meet their goals. If a new technology is involved, the implied risks are even higher.

The areas of research involved in the Launcher Program encompass structural dynamics, control, thermal, space power, propulsion, and software development. Such a multidisciplinary approach demands the best use of knowledge to achieve the organizational objectives.

Innovation

Innovation means ideas applied successfully, differing from invention, which is an idea made visible (Mckeown, 2008). Innovation generates technology, and it results from research and experimentation, which implies that the organization has to clearly define its goals and, based on these goals, establish and conduct the necessary research in a controlled way.

There is a myriad of possible ways to seek solutions and produce advanced technologies in the context of a multidisciplinary and complex space program, so research has to be carefully planned. In order to prioritize specific knowledge areas for investment, at least the following aspects should be considered:

• A thorough benchmarking of what is already available in the space market;
• The associated costs;
• Acquisition crisis;
• Unanticipated failure modes of the Program;
• Development problems with individual elements that can cripple the schedule and budget;
• Uncertain technological changes.

Based on an analysis of these aspects, a plan has to be elaborated, including strategic goals and directives for the space program. This plan is crucial for kicking off a knowledge management process and organizing the research efforts.

Once the goals are established, it is essential for the organization to come up with a process to deal with the ongoing project portfolio and all the technologies that may be generated, including their results. The process to be adopted at IAE is presented in the "Methodology" section and is intended to provide a complete view of the Institute in terms of the main research topics, projects and results, applied technologies, human resources involved in research, and related costs.

Most of the solutions for space systems problems depend on new knowledge, which has to be provided by new technology. The research is necessary to support the effective application of technology in support of the space program’s needs. Scientific research advances the frontiers of knowledge, and technically there is no final point.

It is highly desirable that the performance of the space systems is improved as an outcome of research, but the definition of the mission's success has to be clear in order to prevent these systems from remaining indefinitely under development.


The establishment of an efficient system to deal with innovation at IAE should be based on concrete information about the relationship between the main research areas and the Launcher Program's strategic goals. The research groups are composed of research topics, which have related research projects. The research projects are the main course of action to generate technology applied to the Launcher development.

Once this information is available and kept up to date, actions may be taken to organize the research initiatives into a few principal research groups that are aligned with the Institute's strategic goals. In this manner, these groups will grow stronger with the organization's support.

The core idea behind the advanced research projects at IAE is to carefully weigh the necessary investments when conducting a space engineering project. Therefore, the results of these projects will be converted into requirements and technology applied to the Launcher Program. Consequently, a technology transfer plan may be elaborated to pass on the projects' findings to the development team, which works within the mission's restrictions.

This knowledge transfer is fundamental to define a space system baseline for the designers to advance in the design and construction of the system, measuring time, sizes, and development costs.

Since a collection of new technologies in their early development stages is produced as a result of advanced research projects, it is essential to incorporate these emerging technologies into the Launcher development, envisioning a real mission.

However, incorporating emerging technologies into a real mission is not as simple as it appears to be. It is necessary to employ adequate systems engineering techniques to enable the transition and insertion of these technologies into current and future space systems. This process does not start when the research project is concluded, but rather at the initial research project proposal, when it should be established what results are expected and how the generated technology will enhance certain capabilities and/or provide new functionality for the space system. This information will be crucial when judging the research project proposal and its approval by the Institute, as established in the next section.

Since research projects are related to a diversity of areas, it was essential for IAE to set up an effective knowledge management system to keep full control of what has already been done, what is necessary, the health of the research projects, and the effective connection of these projects to the Launcher Program.

METHODOLOGY

The process of knowledge exploration

The process of knowledge exploration is aimed at fulfilling the established needs for new technologies to solve problems inherent to the complex requirements of the space system. It is an organizational process for converting information into knowledge and making that knowledge accessible.

Complex space problems have to be broken down into smaller, more manageable pieces. These pieces are converted into subsystems that are undertaken with a multidisciplinary approach. In the case of IAE, there are five fundamental factors that are considered during the process of knowledge management and exploration in order to reach the desirable effectiveness and efficiency.

The first factor considered is the knowledge transfer factor. Its effectiveness will be achieved only when there is an efficient process, incentive, or reward for delivering a particular required technology.

The second factor is the research project's relevance factor. At IAE, research projects are the means to advance in several research areas, hence generating technology. A multidisciplinary approach is employed to assess and approve new project proposals, based on their best use of knowledge and their alignment with systems engineering, avoiding the approval of projects that propose solutions to problems that were never defined. There are specific criteria for project approval, involving the collective interest of stakeholders and the organization's strategic objectives.

The human resource factor is strategic in this context, as people are the most valued asset of an organization (Armstrong, 2006): the working team contributes individually and collectively to the achievement of the objectives of the primary mission. The knowledge and experiences acquired by the teams in former research projects, activities, and lessons learned are taken into account when deciding which research project proposals to approve.

The fourth factor is the cost and time factor. It is important for assessing the costs related to the research development, considering the time such new technology will take to become mature enough to be employed in a space project. This factor has a direct impact on the research project assessment, bringing to light the necessary investment in new infrastructure, the reuse of current installations, the need for new acquisitions, and the project planning and schedule.


The last factor, the risk factor, is intrinsically connected with costs and the system's performance. The research project has to satisfy the needs and a set of specifications in a manner that is cost-effective and conforms to a predetermined schedule with an acceptable risk, while keeping a satisfactory performance. The inherent risk estimation should also consider the technology's readiness level (DOD, 2001).

Figure 1 illustrates these five factors in a context of knowledge management exploration at IAE.

Research Projects

Research projects, as aforementioned, are the means to explore the scientific goals of the IAE Launcher Program and, as a result, to transition the resulting applicable technology into practice.

The selection of new research projects must make full use of the existing knowledge by correlating separate sources and showing how they can effectively be exploited.

The research teams have to work toward a common goal, without duplicating efforts, conflicting subsystems design, or incompatible interfaces. These interfaces have to be well defined and controlled.

Another point to consider when establishing new research projects is the analysis of the critical success factors of the space mission, for which good results will increase the performance necessary for its completion. This analysis will help reduce the principal investigators' insistence on conducting state-of-the-art, complex, and risky research projects (Bitten, Bearden and Emmons, 2005). Performance requirements that are based on key technologies should consider their maturity (Mankins, 1995).

At IAE, the main investigators and their teams are organized into main research groups (RGs), each related to a space program knowledge area. These RGs usually have more than one main research topic (RT). The research projects (RPs) are often connected to one or more RTs.

In order to manage future research projects and resulting technologies, a knowledge management system containing all the possible information on RGs, RTs, and RPs was essential, not only to make the right decisions about the strategic research investment, but also to evaluate research project proposals.

With a decision support system based on knowledge management, much of the information available in the research proposal can be checked for consistency and reasonableness, in order to avoid, for example, the overloading of human resources and duplication of infrastructure.
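As an illustration of the kind of consistency check such a decision support system might perform, the sketch below models the RG/RT/RP hierarchy and flags overloaded investigators and duplicated infrastructure requests. It is a minimal, hypothetical example in Python; the entity fields, the `max_projects` threshold, and the checks themselves are assumptions for illustration only and do not describe the actual IAE system.

```python
from dataclasses import dataclass, field

@dataclass
class ResearchProject:              # RP
    title: str
    team: list[str]                 # investigator names
    infrastructure: set[str]        # facilities requested by the proposal

@dataclass
class ResearchTopic:                # RT
    name: str
    projects: list[ResearchProject] = field(default_factory=list)

@dataclass
class ResearchGroup:                # RG
    name: str
    topics: list[ResearchTopic] = field(default_factory=list)

def overloaded_investigators(groups, max_projects=2):
    """Return investigators appearing in more proposals than the assumed limit."""
    load = {}
    for rg in groups:
        for rt in rg.topics:
            for rp in rt.projects:
                for person in rp.team:
                    load[person] = load.get(person, 0) + 1
    return {person: n for person, n in load.items() if n > max_projects}

def duplicated_infrastructure(groups):
    """Return facilities requested by more than one proposal."""
    seen, duplicated = set(), set()
    for rg in groups:
        for rt in rg.topics:
            for rp in rt.projects:
                for item in rp.infrastructure:
                    if item in seen:
                        duplicated.add(item)
                    seen.add(item)
    return duplicated
```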

A Committee composed of two investigators from each main research area of the Launcher Program is responsible for assessing the project proposals that are submitted.

The Committee determines whether or not these proposals will become selected research projects, based on a final grade given by the Committee members. The project evaluation and final grade are based on the fulfillment of the evaluation criteria established in Table 1, which can be quantified and classified into five levels, as shown in Table 2.
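A minimal sketch of how such grading could be aggregated is shown below: each proposal receives a grade from 1 to 5 (the levels of Table 2) on each of the 19 criteria of Table 1, the Committee members' grades are averaged into a final grade, and graded proposals are ordered into a priority list. The equal weighting, the selection threshold, the function names, and the sample data are illustrative assumptions, not the Committee's actual formula.

```python
from statistics import mean

def final_grade(grades_by_member):
    """Average the 1-5 criterion grades given by each committee member.

    grades_by_member: dict mapping member name -> list of 19 criterion grades.
    """
    member_averages = [mean(grades) for grades in grades_by_member.values()]
    return mean(member_averages)

def priority_list(proposals, threshold=3.0):
    """Rank proposals whose final grade reaches the (assumed) threshold."""
    graded = [(title, final_grade(g)) for title, g in proposals.items()]
    selected = [item for item in graded if item[1] >= threshold]
    return sorted(selected, key=lambda item: item[1], reverse=True)

# Hypothetical usage with two proposals and two committee members:
proposals = {
    "RP-01 (hypothetical proposal A)": {"member 1": [5, 4, 4] + [4] * 16,
                                        "member 2": [4, 4, 3] + [4] * 16},
    "RP-02 (hypothetical proposal B)": {"member 1": [3, 3, 2] + [3] * 16,
                                        "member 2": [3, 2, 2] + [3] * 16},
}
print(priority_list(proposals))   # only proposal A passes the assumed threshold
```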

Figure 1: The five factors (technology transfer, research relevance, human resources, cost and time, and risk) and the knowledge management context.

If all these factors are considered in the process of knowledge management and exploration, and if this relationship is well coordinated, it will bring effectiveness and efficiency to the space program.

One of the crucial points of knowledge management in a space program is to recognize the differences between the real space system and the research projects and, consequently, to implement scale-appropriate strategies for technology transfer.

Research projects are usually small in size and require small teams to carry out all the work, while a real space system requires established subsystem responsibilities, team discipline, and the ability to manage and react to changes. These established differences imply that different methods are necessary to meet the needs of both research projects and space systems. In order to improve the probability of mission success, there are systems engineering techniques that apply exclusively to space systems.

NASA makes implementation decisions and defines what is acceptable for a mission based on a traceability matrix (NASA, 2008) that captures the connection between the scientific objectives, the scientific measurement requirements, the functional requirements, the preliminary implementation strategy, and the preliminary performance of the proposed system.
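The sketch below illustrates, in a generic way, what such a traceability matrix could look like as a data structure, linking each scientific objective to measurement requirements, functional requirements, a preliminary implementation strategy, and preliminary performance, with a simple completeness check. The structure, field names, and example row are assumptions for illustration and are not taken from the NASA template.

```python
# One row per scientific objective; every field name below is an assumption.
traceability_matrix = [
    {
        "scientific_objective": "Characterize the payload vibration environment (hypothetical)",
        "measurement_requirements": ["accelerometer data during first-stage flight"],
        "functional_requirements": ["onboard data recorder", "telemetry downlink"],
        "implementation_strategy": "instrumented flight of a prototype stage",
        "preliminary_performance": "vibration spectrum resolved up to 2 kHz",
    },
]

def incomplete_rows(matrix):
    """Return objectives whose traceability chain is missing some element."""
    required = ("measurement_requirements", "functional_requirements",
                "implementation_strategy", "preliminary_performance")
    return [row["scientific_objective"]
            for row in matrix
            if any(not row.get(field) for field in required)]

print(incomplete_rows(traceability_matrix))   # [] when every chain is complete
```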


A list of the selected projects is elaborated, in which a priority is established for each selected project based on its final grade. The Brazilian Space Agency (AEB) provides the research grants for the selected projects based on this list.

During the research project development, constant project monitoring is performed by tracking the main milestones and resulting products. The assessment of the technologies generated by the projects can be done in different ways, and it is usually quite challenging. One example of NASA's successful technology transfer measurement is one of its key measures of project success, called the Penetration Factor (McGill et al., 2006).

Once the research projects are approved, the Research & Development Coordinator (R&DC) is responsible for keeping track of each project's status. A very simple computer system, called the Project Tracking System, is used for the communication between the R&DC and the project managers, enabling the latter to register actions taken and to upload new deliverable releases of the project. This system can provide up-to-date project data to the proposed knowledge management system.
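A minimal sketch of what such a tracking record might contain is given below; the class, its fields, and the export format are hypothetical and only illustrate the kind of data exchanged between the Project Tracking System and the knowledge management system.

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class TrackedProject:
    """Hypothetical record kept by the Project Tracking System for one RP."""
    title: str
    manager: str
    milestones: dict = field(default_factory=dict)    # milestone name -> due date
    actions: list = field(default_factory=list)       # (date, description) pairs
    deliverables: list = field(default_factory=list)  # uploaded release names

    def register_action(self, description: str) -> None:
        self.actions.append((date.today(), description))

    def upload_deliverable(self, release_name: str) -> None:
        self.deliverables.append(release_name)

    def status_for_km_system(self) -> dict:
        # Summary pushed to the knowledge management system (assumed format).
        return {
            "title": self.title,
            "manager": self.manager,
            "open_milestones": list(self.milestones),
            "last_actions": self.actions[-3:],
            "deliverables": self.deliverables,
        }
```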

The final assessment of the project results considers the space program’s goals and the requirements to produce a well-defined and useful set of measures to analyze the research project’s efficiency and efficacy. The measures of the project’s success will consist of a set of quantitative and qualitative information including the following data:

• Technology delivered;
• Project outcomes that generate successful technology transfer for the space program;
• Appropriate combination of basic and applied research, if it is the case;
• Descriptions of lessons learned;
• Research accuracy in meeting the original selection criteria, including technology transfer potential;
• Interaction with other RGs;
• Uncertainty of the results and conclusions;
• Deviation of the results from the initial project goals and theories;
• Experiments with inconclusive or negative results;
• Procedures that can guarantee data exposition and test reproduction;
• Generalization of the results to more general cases.

The evaluation of the resulting technology of a research project will also consider its technology readiness level (TRL). Therefore, the categories of technology readiness levels from DOD 5000.2-R (Department of Defense, 2001) will be used for identifying the maturity of the technology. The resulting technology has to be at least level four in order to be employed by the Space Systems Engineering.
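As an illustration only, this maturity gate could be expressed as in the sketch below. The numeric threshold of four comes from the text; the short level descriptions paraphrase the commonly cited DoD/NASA TRL scale and should be checked against DOD 5000.2-R before any real use.

```python
# Abbreviated TRL descriptions (paraphrased from the commonly cited DoD/NASA scale).
TRL_SCALE = {
    1: "basic principles observed and reported",
    2: "technology concept and/or application formulated",
    3: "analytical and experimental proof of concept",
    4: "component and/or breadboard validation in a laboratory environment",
    5: "component and/or breadboard validation in a relevant environment",
    6: "system/subsystem model or prototype demonstration in a relevant environment",
    7: "system prototype demonstration in an operational environment",
    8: "actual system completed and qualified through test and demonstration",
    9: "actual system proven through successful mission operations",
}

def ready_for_space_systems_engineering(trl: int, minimum: int = 4) -> bool:
    """Gate from the text: the resulting technology must be at least TRL 4."""
    if trl not in TRL_SCALE:
        raise ValueError(f"TRL must be between 1 and 9, got {trl}")
    return trl >= minimum
```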

Table 1: Evaluation Criteria.

Research Projects Evaluation Criteria

1. Association with the Institute's high-priority strategic objectives

2. Relevance of the project proposal in the space systems research context

3. Relation with the main research areas established in the call for submission of RPs

4. Relevance and justification to conduct the RP

5. Relationship between the RP and the Institute’s knowledge base

6. Clarity of the project goals and expected results

7. Identification of the measures of the project success

8. Innovation of the research products

9. Scientific methodology (logical and coherent description to reach the final research goal)

10. Qualification of the research team

11. Reasonableness of cost and schedule

12. Needed infrastructure for the project development

13. Proposed infrastructure useful across multiple RPs and RGs

14. Direct applicability of the RP products to the target domain and across multiple projects and RGs upon completion

15. Usefulness of products

16. Technology Transfer Plan

17. Development of human resources

18. Identification of the environmental risks, if it is the case, and the procedures for minimizing them

19. Overall quality

Table 2: Evaluation Criteria Fulfillment.

Grade Classification

5 Complete

4 Good

3 Regular

2 Minimum

1 None


All of the project information will be collected and then inserted into the knowledge management system, which will also hold information concerning Research Groups, Research Topics, and Research Projects. Since all of this information will be connected, it can be used to define new strategies and manage correlated technologies.

A process to define strategies

The Institute's main research areas of interest have to be constantly reviewed to reflect technology readiness, project results, lessons learned and, most importantly, the Institute's goals. These goals will drive the definition of strategies for future work and justify the investments and research in new technologies.

Once the main research areas are reviewed, the flow of technology from research to actual practice and its employment in the space program is achieved by carrying out research that is externally valid as well as accepted in the national and international space community.

Figure 2 presents a pictorial view of the proposed process that is being implemented at the Institute. This process is strongly based on the knowledge management system described above as a supporting pillar for managing the correlated technologies within the Space Program.

Figure 2: The proposed process for managing technologies.

As Figure 2 depicts, the process may be started by a problem statement based on Space Program needs or on the Institute's strategic goals, or identified by an IAE need for standardization. Establishing a problem can start new research that determines the research projects. Once the research projects are approved, the Project Tracking System starts tracking them regarding schedule, costs, and achievements. The projects may also generate academic publications that will be evaluated by the external community. The Project Tracking System sends research project information to the Knowledge Management System, hence providing useful information to systems engineering and to decision makers, according to IAE standards and strategic goals.

Another powerful driver of the process shown in Figure 2 is the collaboration among the investigators. The researchers should collaborate with each other within the Institute in order to identify commonalities and maximize expertise. Collaboration among the research groups is highly desirable and is encouraged by promoting easy access to up-to-date information about the research groups and their members, research topics, research projects under development, project results, and the research groups' scientific production.

This information is available in the knowledge management system. In order to have strong research groups with a well-defined structure, they are evaluated using a specific set of criteria also available in the system. The stronger the group, the better its chances of obtaining grants for research projects.

The improvement of communication among researchers, developers, and practitioners is a must, and it is crucial for the integration of scientific research into systems engineering activities.

RESULTS AND DISCUSSION

The conception and development of a knowledge management system was necessary to provide the right information at the right time to make the right decisions, in order to improve the effectiveness and efficacy of the Institute's Space Program, as illustrated in Figure 1.

The main idea of this system is to organize and to store information about the RGs, RTs, and RPs, capturing the experiences of investigators and research groups, the resulting technologies and their association with the Space Program. Additionally, the design, review, and implementation of both social and technological processes help to improve the application of knowledge in the Institute.

Figure 3 illustrates the schematic idea adopted in the conception of the knowledge management system. This Figure shows the important role of the research projects as generators of new technologies. The adequacy of the research project for the Space Program’s goals is given by its measures of success.



Figure 3 also illustrates the hierarchy between projects, research topics, and groups, reinforcing the power of projects in this context.

Figure 3: A schematic idea of the knowledge management system.

A database to implement the knowledge management system is now under development. It stores all the data related to the RGs, RTs, RPs, and HRs, and their relationships. The sets of criteria used to evaluate project proposals and research groups are also incorporated in the system, as well as the TRLs. The database was designed to keep up with the dynamic aspect of a knowledge management environment.
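A minimal sketch of how such a relational schema could be organized is shown below, using SQLite for illustration; the table and column names are assumptions and do not reflect the actual database design under development.

```python
import sqlite3

# Illustrative schema only: table and column names are assumptions.
schema = """
CREATE TABLE research_group (id INTEGER PRIMARY KEY, name TEXT NOT NULL);
CREATE TABLE research_topic (id INTEGER PRIMARY KEY, name TEXT NOT NULL,
                             group_id INTEGER REFERENCES research_group(id));
CREATE TABLE research_project (id INTEGER PRIMARY KEY, title TEXT NOT NULL,
                               topic_id INTEGER REFERENCES research_topic(id),
                               final_grade REAL, trl INTEGER);
CREATE TABLE researcher (id INTEGER PRIMARY KEY, name TEXT NOT NULL);
CREATE TABLE project_team (project_id INTEGER REFERENCES research_project(id),
                           researcher_id INTEGER REFERENCES researcher(id));
"""

conn = sqlite3.connect(":memory:")
conn.executescript(schema)

# Example query: projects whose resulting technology has reached TRL 4 or higher.
mature = conn.execute(
    "SELECT title, trl FROM research_project WHERE trl >= 4 ORDER BY trl DESC"
).fetchall()
print(mature)
```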

A user-friendly interface was designed, providing an extensive set of possible queries for all of the staff at the Institute. The insertion of new information and its updating are easily done by authorized personnel and are certified by the R&DC.

The knowledge management system is accessed via a web browser, available on the Institute's intranet. All the system information can be extracted and visualized at any time by all the investigators and members of the organization, using the existing query mechanisms.

CONCLUSION

This paper presented an ongoing work at IAE to promote the best use of the available knowledge to manage the correlated technologies and employ them efficiently in the Brazilian Satellite Launcher Program. A process was defined to carry out these activities, and a knowledge management system is under construction to support it. The knowledge management is based on the fact that research projects are potential generators of new technologies for the Program. The multidisciplinary aspect of the Program results in research projects linked to specific research topics. These research topics are aggregated into research groups.

One of the expected benefits of the implementation of the proposed process and the core knowledge management system is the assurance that the intellectual capabilities of the Institute are shared, preserved, and institutionalized.

The justification of the investments in research is another side of the coin. The Brazilian Space Agency will carefully look at past research projects and the effective application of their results to the Launcher Program in order to award grants for future research projects. This fact increases the responsibility of the Institute in promoting strategic research that adequately fits its space mission.

Suggestions from researchers for improvements in the space program will be encouraged, as well as the involvement of practitioners and developers in defining research design needs. This initiative will improve the research projects' evaluation and selection and also grant them recognition.

Improvements will certainly be necessary in the process and in the original system conception to incorporate new requirements as the process increasingly matures, integrating more sophisticated techniques for data visualization, graphics, animation, 3-D displays, and data mining.

The final aspiration of this work is that the overall benefits of knowledge management will be found in tomorrow's new space programs.

ACKNOWLEDGEMENTS

This work was partially supported by the Brazilian Space Agency (AEB).

REFERENCES

Armstrong, M., 2006, "A Handbook of Human Resource Management Practice", 10th ed., Kogan Page, London.

Bitten, R.E., Bearden, D.A., Emmons, D.L., 2005, "A Quantitative Assessment of Complexity, Cost, and Schedule: Achieving a Balanced Approach for Program Success", Proceedings of the 6th IAA (International Academy of Astronautics) International Conference on Low-Cost Planetary Missions, Kyoto, Japan.

DOD, 2001, DOD 5000.2-R, "Mandatory Procedures for Major Defense Acquisition Programs (MDAPs) and Major Automated Information System (MAIS) Acquisition Programs", Department of Defense, Washington, DC.

Mankins, J.C., 1995, "Technology Readiness Levels: A White Paper", NASA Advanced Concepts Office, Office of Space Access and Technology, Washington, DC. Available at: http://www.hq.nasa.gov/office/codeq/trl/trl.pdf.

McGill, K., Deadrick, W., Hayes, J. H., Dekhtyar, A., 2006, “We Have a Success Story: Technology Transfer at the NASA IV&V Facility”, Proceedings of the 2006 International Workshop on Software Technology Transfer in Software Engineering, Shanghai, China.

Mckeown, M., 2008, "The Truth About Innovation", Pearson Financial Times, New York, NY.

NASA, 2008, "Standard PI-led Mission Announcement of Opportunity", National Aeronautics and Space Administration. Available at: http://sso.larc.nasa.gov/standardAOtemplate.pdf.

Sage, A.P., 1992, "Systems Engineering", Wiley Series in Systems Engineering, Wiley, New York.

Wertz, J.R., Larson, W.J., 1996, "Reducing Space Mission Cost", Microcosm Press, Torrance, CA.


Thesis abstracts

This section presents the abstracts of the most recent Master's and PhD theses related to aerospace technology and management.

ELICERE: The elicitation process for dependability goals in critical computer systems – A case study for space application

Carlos Henrique Netto Lahoz
Institute of Aeronautics and Space
[email protected]

PhD Thesis in Electric Engineering at the Polytechnic School of the University of São Paulo, São Paulo, São Paulo State, Brazil, 2009.

Advisor: Prof. Dr. João Batista Camargo Júnior

Keywords: Dependability, Goals, Elicitation, Critical computer systems, Space project.

Abstract: The technological advances in electronics and software have been rapidly assimilated by computer systems, demanding new approaches in software and systems engineering to provide reliable products under well-known quality criteria. In this context, requirements engineering has a strategic role in project development. Problems in the elicitation activity contribute to producing poor, inadequate or even non-existent requirements that can cause mission losses, material or financial disasters, premature project termination, or an organizational crisis. This thesis introduces the elicitation process for dependability goals, called ELICERE, applied to critical computer systems and based on a goal-oriented requirements engineering technique, called i*, and on the safety engineering techniques HAZOP and FMEA, which are applied for the identification and analysis of the operational risks of a system. After creating the system models using i* diagrams, they are analyzed through guidewords based on HAZOP and FMEA, from which goals related to dependability are extracted. Through this interdisciplinary approach, ELICERE promotes the identification of goals that meet the quality requirements related to dependability for critical systems, still in the project conception phase. The case study approach is based on a qualitative and descriptive single case, using the computer system project of a hypothetical launching rocket, called V-ALFA. The application of ELICERE in this space project intends to improve the requirements engineering activities in the computer system of the Brazilian Satellite Launch Vehicle and also serves as a way to explain how the ELICERE process works.

Variability management in software product lines using adaptive object and reflection

Luciana Akemi Burgareli
Institute of Aeronautics and Space
[email protected]

PhD Thesis in Electric Engineering at the Polytechnic School of the University of São Paulo, São Paulo, São Paulo State, Brazil, 2009.

Advisors: Dr. Selma Shin Shimizu Melnikoff and Dr. Mauricio Gonçalves Vieira Ferreira

Keywords: Software product line, Variability, Adaptive object model, Reflection, Brazilian Satellites Launcher

Abstract: The Software Product Line approach offers benefits such as savings, large-scale productivity, and increased product quality to software development, because it is based on software architecture reuse, which is more planned and aimed at a specific domain. Variability management is a key and challenging issue, since this activity helps identify, design, and implement new products derived from a software product line. This work defines a process for the variability management of a software product line. After modeling the variability and extracting the variants from use case diagrams and features, the next step is to specify the variability that was identified. Finally, the proposed process uses a variability mechanism based on an adaptive object model and reflection as support in the creation of variants. The proposed process uses as a case study the software system of a hypothetical space vehicle, the Brazilian Satellites Launcher.

Complex permittivity and permeability behaviors, 2-18GHz, of RAM based on carbonyl iron and MnZn ferrite

Adriana Medeiros Gama
Institute of Aeronautics and Space
[email protected]


PhD Thesis in Aeronautics and Mechanics Engineering, Physics and Chemistry in Aerospace Materials at the Technological Institute of Aeronautics, ITA, São José dos Campos, São Paulo State, Brazil, 2009.

Advisor: Prof. Dr. Mirabel Cerqueira Rezende

Keywords: Permittivity, Permeability, Microwave, Radar absorbing material.

Abstract: The main objective of this study is to contribute toward a comprehensive understanding of the interaction of the magnetic additives of radar absorbing materials (RAM) with the electromagnetic wave in the microwave range (2-18 GHz). Thus, this work shows the electromagnetic behavior of different RAMs based on MnZn ferrite, carbonyl iron, and their mixtures in a silicone rubber matrix. Emphasis is given to the determination of the complex permittivity and permeability parameters in the frequency range of 2 to 18 GHz and to reflection loss measurements between 8 and 12 GHz. For the determination of the complex parameters, a methodology based on the coaxial transmission airline technique was established. The results show that the carbonyl iron interacts with the electric field of the incident wave through the storage component, since the electric field loss component is insignificant. The MnZn ferrite used shows variation of both storage and loss components with the increase of the additive concentration in the RAM and with frequency. Considering the permeability, it is verified that the RAM sample based on carbonyl iron presents the highest values (1.0 to 2.2); in other words, this additive interacts more intensely with the wave magnetic field than the ferrite does (0.7 to 1.8). The reflection loss measurements of RAM processed with the pure additives as well as with their mixtures present good results (70-99%). It is also observed that these samples behave as the resonant RAM type. The results also confirm that the microwave attenuation is dependent on the magnetic additive proportion, sample thickness, and frequency. In the comparative studies of reflection loss, the experimental measurements and simulations show good agreement, suggesting that simulation is an adequate support tool for optimizing these materials, diminishing the costs and time of RAM processing.
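For readers relating the percentage figures above to reflection loss expressed in decibels, the small sketch below shows the standard conversion for a metal-backed absorber, where essentially all non-reflected power is taken as absorbed; it is a generic illustration, not a calculation from the thesis data.

```python
import math

def attenuation_percent(reflection_loss_db: float) -> float:
    """Fraction of incident power not reflected, for a metal-backed RAM.

    reflection_loss_db is negative (e.g. -20 dB); the reflected power fraction
    is 10**(RL/10), so -20 dB corresponds to 99% attenuation.
    """
    reflected = 10 ** (reflection_loss_db / 10)
    return 100 * (1 - reflected)

def reflection_loss_db(attenuation_pct: float) -> float:
    """Inverse conversion: 70% attenuation corresponds to about -5.2 dB."""
    return 10 * math.log10(1 - attenuation_pct / 100)

print(attenuation_percent(-20))   # ~99.0
print(reflection_loss_db(70))     # ~-5.23
```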

Investigation of the distribution of the film cooling for the liquid rocket engine – LRE with 75 kN thrust

Luís Antonio Silva
Institute of Aeronautics and Space
[email protected]

Master’s Thesis in Engineering, defended at the Technological Institute of Aeronautics, ITA, São José dos Campos, São Paulo State, Brazil, 2009

Advisor: Prof. Dr. Amilcar Porto Pimenta

Keywords: Investigation, Film cooling, Rocket engine, Liquid propulsion.

Abstract: This work presents a methodology for analyzing a liquid rocket engine cooling system and, more specifically, an investigation of the film cooling method applied to a 75 kN thrust, kerosene and liquid oxygen rocket engine. In the case study, the engine cooling film is created by the fuel injected by peripheral injectors. Two possibilities were analyzed: in the first, it was assumed that 50 per cent of the fuel injected by the peripheral injectors became part of the cooling film; in the second, the cooling film is constituted only by the fuel that flows on the walls. The injection system of an engine under development at IAE (L15) was used in cold tests to validate theoretical and empirical design data obtained by experts from the Moscow Aviation Institute (MAI) and to refine some design parameters used in engines under development at IAE.

Adjusting the vertical profile of wind data obtained from an anemometric tower and radiosounding at the "Alcântara Launch Center"

Ricardo Costa Leão
Institute of Aeronautics and Space
leã[email protected]

Master’s Thesis in Engineering, defended at the Technological Institute of Aeronautics, ITA, São José dos Campos, São Paulo State, Brazil, 2009

Advisors: Prof. Dr. Íria Fernandes Vendrame and Prof. Dr. Gilberto Fernando Fisch

Keywords: Matching, Vertical wind profiles, Cubic splines.

Abstract: This work aims to adjust ("match") two different vertical wind profiles: one from an anemometric tower with six levels (6.0, 10.0, 16.3, 28.5, 43.0 and 70.0 m), obtained by direct measurements from the anemometers, and the other from radiosoundings, with the wind determined by the GPS technique up to 500 m in vertical layers of 50 m. The result was a single profile obtained using the cubic spline interpolation method, detection of the average deviation ("bias") between the profiles, and adjustment ("matching") of the profiles, avoiding abrupt changes in the average profile. A real case study was conducted with the determination of the trajectory of rockets launched from the "Alcântara Launch Center", attaining the result that the point of impact strikes the dispersion field with less than 10 per cent error, relative to its radius, when the adjusted profile is used.
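As a generic illustration of this kind of profile matching (not the thesis's actual procedure), the sketch below merges a low-level tower profile with an upper-level sounding profile using SciPy's cubic spline; the sample wind-speed values and the simple bias correction are assumptions.

```python
import numpy as np
from scipy.interpolate import CubicSpline

# Hypothetical wind-speed samples (m/s): tower levels and 50 m sounding layers.
tower_z = np.array([6.0, 10.0, 16.3, 28.5, 43.0, 70.0])
tower_v = np.array([3.1, 3.6, 4.0, 4.6, 5.1, 5.8])
sonde_z = np.arange(50.0, 550.0, 50.0)
sonde_v = np.array([5.3, 6.0, 6.4, 6.7, 7.0, 7.2, 7.4, 7.5, 7.6, 7.7])

# Simple bias estimate in the overlap region (50-70 m), then a single spline.
overlap_sonde = np.interp([50.0, 70.0], sonde_z, sonde_v)
overlap_tower = np.interp([50.0, 70.0], tower_z, tower_v)
bias = np.mean(overlap_sonde - overlap_tower)

z = np.concatenate([tower_z, sonde_z[sonde_z > tower_z[-1]]])
v = np.concatenate([tower_v, sonde_v[sonde_z > tower_z[-1]] - bias])
profile = CubicSpline(z, v)

print(profile(np.array([10.0, 100.0, 500.0])))  # wind speed at selected heights
```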

Radar absorbing materials based on thin films processed by physical vapor deposition technique

Viviane Lilian Soethe
Technological Institute of Aeronautics
[email protected]; [email protected]

Thesis submitted for PhD degree in Physics at Technological Institute of Aeronautics, ITA, São José dos Campos, São Paulo State, Brazil, 2009.

Advisors: Dr. Mirabel Cerqueira Rezende and Dr. Evandro Luis Nohara

Keywords: RAM, Thin films, Radar absorbing materials, PVD.

Abstract: This work shows the study of the production of metallic thin films, with nanometric thicknesses, by Physical Vapor Deposition (PVD). Triode magnetron sputtering, electron beam, and resistive evaporation techniques were used for the deposition of Al, Ni, Ti, Cu, C, CNx, and AlxFey and NixTiy alloys. These materials were deposited on polymeric substrates of poly(ethylene terephthalate), with thicknesses of around 0.1 and 0.01 mm. Characterization of the films involved different aspects, such as thickness, composition, and the electromagnetic wave attenuation behavior in the frequency range of 8-12 GHz. The correlation of the data obtained aimed to evaluate the performance of the nanofilms as Radar Absorbing Materials (RAM). The main result may be cited as the success of the PVD technique used for metallic thin film production, which is much lighter than conventional absorbers and shows an excellent RAM behavior in the microwave range. Metal nanofilms are characterized as presenting thickness values below the skin depth and dielectric losses. The experimental results also show that the film's performance in microwave attenuation is affected by different factors, such as the deposition technique used, the metal type, and the film thickness. Among the results obtained, we may mention: Al films with attenuation values of 99 per cent at the frequency of 9.5 GHz; AlxFey and NixTiy films, processed by the resistive evaporation technique, with attenuation values of 70 per cent in broadband (8-12 GHz); and also multilayer structures obtained by an adequate combination of nanofilms, with better RAM performance.

Synthesis, doping and characterization of furfuryl alcohol resin and phenol-furfuryl alcohol resin aimed at the optimization of glass-like carbon processing

Silvia Sizuka Oishi
São Paulo State University
[email protected]

Thesis submitted for Masters in Mechanical Engineering at São Paulo State University, Guaratinguetá, São Paulo State, Brazil, 2009.

Advisors: Dr. Edson Cocchieri Botelho and Dr. Mirabel Cerqueira Rezende

Keywords: Glassy-like carbon, Doping, Furfuryl alcohol resin, Phenol-furfuryl alcohol resin, Physicochemical properties.

Abstract: Given the growing importance of glassy carbon material in strategic areas, due to its intrinsic characteristics, such as lower density and good thermal and electrical conductivity values, several studies have sought new polymeric precursors and tighter processing parameters. Similarly, this study aims to establish synthesis routes for furfuryl and phenol-furfuryl alcohol resins and their doping with copper particles, in order to produce reticulated glassy carbon (RGC) electrodes. Within this context, different formulations of furfuryl and phenol-furfuryl alcohol resins were synthesized by variation of the monomers (furfuryl alcohol, phenol and formaldehyde, respectively). Confirmation of the success of the synthesis was undertaken using FT-IR spectroscopy, gas chromatography, thermal analyses by differential scanning calorimetry (DSC), and carbon yield measurements, which present results between 27 and 45 per cent of carbon. After this, the specimens were doped with copper colloidal particles. The doped and non-doped resins were catalyzed, impregnated in polyurethane (PU) foams, and carbonized, in order to obtain the reticulated glassy carbon. Optical and Scanning Electron Microscopy analyses show the homogeneity of the PU foam impregnation and the uniform texture of the RGC specimens. Compression results present the best values for the RGC resulting from carbonization of the furfuryl alcohol acid resin (0.55 MPa).


INSTRUCTIONS TO THE AUTHORS

Scope and editorial policy

The Journal of Aerospace Technology and Management is the official publication of the Institute of Aeronautics and Space (IAE) of the Department of Aerospace Science and Technology (DCTA), São José dos Campos, São Paulo State, Brazil.

The Journal is published twice a year (June and December) and is devoted to research and management on different aspects of aerospace technologies. The authors are solely responsible for the contents of their contribution. It is assumed that they have the necessary authority for publication.

When submitting a contribution, the author should classify it according to the area selected from the following topics:

• Acoustics
• Aerodynamics
• Aerospace Systems
• Applied Computation
• Automation
• Chemistry
• Defense
• Electronics
• Management Systems
• Materials
• Mechanical Engineering
• Meteorology
• Propulsion
• Structures
• Vibration

The submissions, except theses and book reviews, will be peer reviewed by three Editorial Board members and selected for publication according to the editorial policy of the journal. Copyrights on all published material belong to IAE. Permission must be requested prior to use.

Mandatory requirements

All papers must include: type of contribution (review article, original paper, short communication, case report, book review or thesis), title, authors' names, electronic addresses and affiliations, abstract, keywords (3 to 6 items, which should be based on the NASA Thesaurus V.2 - Access Vocabulary), and indication of the author responsible for correspondence.

Contents

Editorial

Any researcher may write the editorial on the invitation of the editor-in-chief. The article should not exceed two pages.

Review articles

They should cover subjects falling within the scope of the journal. These contributions should be presented in the same format as a full paper, except that they should not be divided into sections such as introduction, methods, results and discussion. However, they must include a 150 to 200 word abstract, key words, concluding remarks, acknowledgment and references. The article should not exceed 18 pages.

Technical papers

These articles should report the results of original research and need to include: a 150 to 200 word abstract, key words, introduction, methods, results and discussion, acknowledgment, references, tables and/or figures. The article should not exceed 12 pages.

Communications

They should include a 150 to 200 word abstract, key words, tables and/or figures, acknowledgment, and references. The communication should not exceed 8 pages.

Thesis abstracts

The journal welcomes Master's and PhD thesis abstracts for publication.


Paper Submission

Manuscripts should be written in English or Portuguese and submitted electronically. See the instructions at www.jatm.com.br/papersubmission.

References

References should be cited in the text by giving the last name of the author(s) and the year of publication: either "Recent work (Smith and Farias, 1997)..." or "Recently, Smith and Farias (1997)...". With four (4) or more names, use the form "Smith et al. (1997)". If two or more references would have the same identification, distinguish them by appending "a", "b", etc., to the year of publication.

Acceptable references include journal articles, numbered papers, dissertations, theses, published conference proceedings, preprints from conferences, books, submitted articles (if the journal is identified), and private communications.

References should be listed in alphabetical order, according to the last name of the first author, at the end of the article. Some sample references follow:

Bordalo, S.N., Ferziger, J.H. and Kline, S.J.,1989, “The Development of Zonal Models for Turbulence”, Proceedings of the 10th Brazilian Congress of Mechanical Engineering, Vol.1, Rio de Janeiro, Brazil, pp. 41-44.

Coimbra, A.L., 1978, “Lessons of Continuum Mechanics”, Ed. Edgard Blücher, S.Paulo, Brazil, 428 p.

Clark, J.A., 1986, Private Communication, University of Michigan, Ann Arbor.

Silva, L.H.M.,1988, “New Integral Formulation for Problems in Mechanics” (In Portuguese), Ph.D. Thesis, Federal University of Santa Catarina, Florianópolis, S.C., Brazil, 223 p

Soviero, P.A.O. and Lavagna, L.G.M., 1997, "A Numerical Model for Thin Airfoils in Unsteady Motion", RBCM - J. of the Brazilian Soc. Mechanical Sciences, Vol. 19, No. 3, pp. 332-340.

Sparrow, E.M., 1980a, “Forced Convection Heat Transfer in a Duct Having Spanwise-Periodic Rectangular Protuberances”, Numerical Heat Transfer, Vol.3, pp. 149-167.

Sparrow, E.M., 1980b, “Fluid-to-Fluid Conjugate Heat Transfer for a Vertical Pipe-Internal and External Natural Convection”, ASME Journal of Heat Transfer, Vol.102, pp. 402-407.

Associação Brasileira de Normas Técnicas, 2002, NBR6032: “ Abreviação de títulos de periódicos e publicações seriadas”, Rio de Janeiro, Brazil 14p.

BRASIL,1993, “Relatório de atividades”, Ministério da Justiça, Brasília,D.F. Brazil 28p.

Garcia,A.,2005, “Estudo Preliminar de Concepção de Performance de Veículos Lançadores Referentes aos Estudos do Grupo de Trabalho VLS-2010”, IAE, São José dos Campos, Brazil. (ASE-RT-006-2005)

EMBRAPA, 1995, “Unidade de Apoio,Pesquisa e Desenvolvimento de Instrumentação Agropecuária”. Medidor digital multissensor de temperatura para solos, BR n. PI 8903105-9, 26 Jun.1989,30 maio 1995.

Illustrations

All illustrations (line drawings, photographs and graphs) should be submitted, preferably in JPG, TIFF or XLS format, with good definition (1 to 2 Mega Pixels).

References should be made in the text to each illustration. Explanations should be given in the figure legends, so that illustrations are kept clean.

Tables

Authors should take notice of the limitations set by the size and layout of the journal. Therefore, large tables should be avoided. All tables must be mentioned in the text.

Sponsors

This publication is sponsored by Institute of Aeronautics and Space (IAE).


Errata

In the "Table of Contents", Vol.1, Nº.1, Jan. – Jun., 2009, pp. 3, the author's name: Marinho, L. P. B. should be read as: Pires, L. B. M.

In the work "Historical Review and Future Perspectives for the Pilot Transonic Wind Tunnel of IAE", Vol.1, Nº.1, Jan. – Jun., 2009, pp. 20, in "Introduction", the sentence: test section of 2 m x 2.4 m and Mach number from 1.2 to 1.4 should be read as: test section of 2 m x 2.4 m and Mach number from 0.2 to 1.4.

In the work "Avaliação de agente de ligação aziridínico por meio de técnicas de análise química e instrumental", Vol.1, Nº.1, Jan. – Jun., 2009, pp. 55, the author's name: Elizabeth E. Mattos should be read as: Elizabeth C. Mattos.

In the work "Studies using wind tunnel to simulate the Atmospheric Boundary Layer at the Alcântara Space Center", Vol.1, Nº.1, Jan. – Jun., 2009, pp. 91, the author's name: Luciana, P. B. Marinho should be read as: Luciana, B. M. Pires. Also, in the headings of pages 92, 94, 96 and 98, the author's name: Marinho, L. P. B. should be read as: Pires, L. B. M.

In the work "Brazilian Air Force aircraft structural integrity program: An overview", Vol.1, Nº.1, Jan. – Jun., 2009, pp. 107, the author's name: Ribeiro Fabricio N. should be read as: Fabricio N. Ribeiro.
