Inverse Problems in Vibration


About this book: "The last thing one settles in writing a book is what one should put in first."

Contents (as listed):
Jacobian Matrices (Gladwell, G.)
Chapter 2. Full statement of the vibrational problem
Chapter 3. Consideration of the mathematical model for molecular vibration analysis. Direct and inverse problems
Chapter 4. Vibrational problem in internal coordinates. Use of the redundant coordinate system
Chapter 5. Vibrational problem in symmetry coordinates
Chapter 6. Ill-posed problems and the regularization method. Regularizing algorithms for constructing force fields of polyatomic molecules on the base of experimental data
Chapter 7. Numerical methods

This equation, which appears in three related forms, is the governing equation for the vibrating string and rod. The chapter describes the classical approach, as well as some recent techniques that are more readily adaptable to computation.

Chapter 12 discusses families of isospectral continuous systems. Chapter 14 is a short (too short) study of inverse nodal problems. While it is difficult [...], there is now a considerable body of research, due primarily to McLaughlin and Hald, that focuses on what nodal data is sufficient. The study of inverse problems in vibration provides a clear example of this connectedness.

I used the translation by W. [...]. My copy is dated 26th April [...] and contains an 8d (old pence) ticket for the London Transport bus No. [...]. His comments on the place of reason, heart and will in seeking a solution of the problem, though sometimes enigmatic, are as deep and relevant now as they were then. The caption for Chapter 11 reminds me that many people have contributed to this book.

In addition to these, I have freely taken from papers by numerous colleagues worldwide, as referenced in the bibliography. I thank them for pointing out many errors and shortcomings, some of which I have managed to correct. The book was typed by Tracy Taves. Thank you for your stamina and your attention to detail. Colin Campbell helped us out with his understanding of the idiosyncrasies of LaTeX.

Finally, I acknowledge the patience and understanding of my wife, Joyce, who saw me immersed in books in my study for years on end. George Carrier once remarked that the aim of mathematics is insight, not numbers. Since matrix analysis now has an established position in Engineering and Science, it will be assumed that the reader has had some exposure to it; the presentation in the early stages will therefore be brief.

The reader may supplement the treatment here with standard texts. A matrix is a rectangular array of real or complex numbers together with a set of rules that specify how the numbers are to be manipulated. The set of all real matrices, i.e. [...]. We write [...]. (Footnote: Blaise Pascal lived among the French intelligentsia, and in that context it was a bad sign; one should be known for more than just a book one had written. If you met someone you knew who had written a book, you would mention it immediately!) The matrix displayed there is symmetric.


The square matrix A is said to be diagonal if it has non-zero entries only on the principal diagonal running from top left to bottom right. Two matrices A and B can be multiplied in the sense AB only if the number of columns of A is equal to the number of rows of B. This holds even if the matrices are square (see 1.). This result is sufficient [...]. Consider the element in row l, column m on each side of 1. Show that if a33 is changed then the only possible matrix B would be the zero matrix. Are these two matrices equal? We note that each product in the sum contains just one element from each row and just one element from each column of A.
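Since the book's own numerical examples are lost in this excerpt, here is a minimal illustrative sketch (our own data, in numpy) of the multiplication rule stated above: AB is defined only when the number of columns of A equals the number of rows of B, and AB and BA need not agree even when both products exist.

```python
import numpy as np

A = np.array([[1, 2, 3],
              [4, 5, 6]])        # 2 x 3
B = np.array([[1, 0],
              [0, 1],
              [1, 1]])           # 3 x 2

AB = A @ B                       # defined, because A has 3 columns and B has 3 rows
BA = B @ A                       # also defined here, but of a different size
print(AB.shape, BA.shape)        # (2, 2) (3, 3): in general AB and BA are not equal
```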

Lemma 1. If the two rows (columns) are interchanged, then, on the one hand, det A is unchanged, while on the other, Lemma 1. Each term in the expansion is multiplied by n. This follows from Lemmas 1. This follows directly from Lemma 1. We may now prove Theorem 1. Thus for the A in 1.

Thus for A in 1. Thus we write 1. These two results, 1. Thus equation 1. We prove the result for the columns. That for the rows may be proved likewise. We will prove it by induction on q. The theorem is a corollary of Theorem 1. By Theorem 1. Theorem 1. The logical negative of this statement is that if A is singular it does not have an inverse. We now prove the converse. In numerical linear algebra the starting point of almost all the procedures for solving linear equations such as 1. is Gaussian elimination.

This is a systematic reduction of an array (a_lm) to (usually) upper triangular form by subtracting multiples of one equation from another. Exercises 1. Show that if A is upper (lower) triangular, i. The problem 1. The eigenvalue theory for general, i. See Ex. The quantity x^T A x is a scalar. In many physical applications the kinetic energy and the potential energy of a mechanical system may be expressed as quadratic forms in the generalised velocities or displacements, respectively. The kinetic energy of a system is always positive, unless all the generalised velocities are zero.
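The elimination procedure described at the start of this passage can be sketched as follows; this is a bare version without pivoting, and the function name is ours, not the book's.

```python
import numpy as np

def upper_triangularize(A, b):
    """Reduce the system A x = b to upper triangular form by subtracting
    multiples of one equation from another (no pivoting, so it assumes the
    leading entries encountered are nonzero)."""
    U = A.astype(float).copy()
    c = b.astype(float).copy()
    n = len(c)
    for k in range(n - 1):
        for i in range(k + 1, n):
            m = U[i, k] / U[k, k]
            U[i, k:] -= m * U[k, k:]
            c[i] -= m * c[k]
    return U, c   # solve by back-substitution afterwards

U, c = upper_triangularize(np.array([[2., 1.], [1., 3.]]), np.array([3., 5.]))
```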

Now return to equation 1. We discuss this further in Chapter 2. In the notation of Section 1. Equation 1. This means that equation 1. Premultiplying equation 1. An important corollary of this result is Theorem 1. A simple treatment of the classical techniques may be found in Bishop, Gladwell and Michaelson [33].

A comprehensive account of modern techniques is given by Golub and Van Loan []. The classical treatise on the symmetric eigenvalue problem is Parlett []. We are concerned only with the qualitative properties of eigenvalues. Exercises 1. Verify the conditions given in Theorem 1. Generalise this result. This is a particular case of a general result, see e.

Where does the argument used in the proof of Theorem 1. [...]? This is the kind of difficulty [...]. Chapter 2. Vibrations of Discrete Systems. "Our nature consists in motion; complete rest is death." In this chapter we shall give a brief account of those parts of the theory that will be needed for the solution of inverse problems.

The whole lies in a straight line on a smooth horizontal table and is excited by forces F_u, u = 1, 2, ..., n. It is the simplest possible discrete model for a rod vibrating in longitudinal motion. Equations 2. There is a third system which is mathematically equivalent to equations 2. This is the transverse motion of the string shown in Figure 2. But note that the string shown in Figure 2. In order to simulate a string with a free end, the last segment of the string must be attached to a massless ring that slides on a smooth vertical rod.

[Figure 2.: an in-line chain of masses m1, m2, ..., mn joined by springs k1, k2, ..., kn, with displacements u1, u2, ..., un.] Note that both M and K are symmetric; this is a property shared by the matrices corresponding to any conservative system. In this particular example the matrix M is diagonal while K is tridiagonal, i.e., its non-zero entries lie only on the principal diagonal and the two adjacent diagonals. Equation 2.
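As an illustration of the structure just described (diagonal M, tridiagonal K with negative codiagonal), here is a sketch that assembles the matrices for an assumed fixed-free in-line chain and solves the generalized eigenvalue problem; the data and function name are ours, not the book's.

```python
import numpy as np
from scipy.linalg import eigh

def chain_matrices(m, k):
    """Assumed configuration: masses m[0..q-1] in a line, spring k[i] joining
    mass i to mass i-1 (k[0] joins mass 0 to a fixed wall), right end free."""
    q = len(m)
    M = np.diag(np.asarray(m, float))
    K = np.zeros((q, q))
    for i in range(q):
        K[i, i] += k[i]
        if i + 1 < q:
            K[i, i] += k[i + 1]
            K[i, i + 1] = K[i + 1, i] = -k[i + 1]   # tridiagonal, negative codiagonal
    return M, K

M, K = chain_matrices([1.0, 1.0, 2.0], [3.0, 2.0, 1.0])
lam, X = eigh(K, M)                            # generalized problem K x = lambda M x
print(np.allclose(X.T @ M @ X, np.eye(3)))     # eigenvectors come out M-orthonormal
```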

Exercises 2. Consider the multiple pendulum of Figure 2. Transverse vibration of a beam: Figure 2. The systems considered in Sections 2. In such a model, shown in Figure 2. In the uth element, shown in Figure 2. When the end conditions are imposed there will be, as before, only q coordinates u1, u2, ..., uq. Figure 2. and equation 2.


The sign properties may be deduced from 2. On the basis of these examples we now pass to the general case. The restrictions on the matrix K are slightly less severe than those on M since, although the strain energy will always be positive or zero, it will actually be zero if the system has a rigid-body displacement. Notice, for example, that the Y of 2. Use equations 2.
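A small sketch of the remark about rigid-body displacements, using an assumed free-free three-mass chain: the vector of equal displacements stores no strain energy, so K is only positive semi-definite.

```python
import numpy as np

# A free-free (unrestrained) chain of three masses and two coupling springs.
k2, k3 = 2.0, 1.0
K = np.array([[ k2,     -k2,      0.0],
              [-k2,  k2 + k3,    -k3 ],
              [ 0.0,    -k3,      k3 ]])

r = np.ones(3)                     # all masses displaced equally: a rigid-body displacement
print(r @ K @ r)                   # 0.0: zero strain energy
print(np.linalg.eigvalsh(K)[0])    # smallest eigenvalue is zero (to rounding): K is semi-definite
```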

Use the form 2. The vibrations of a membrane and of an acoustic cavity are mathematically similar: both involve just one scalar quantity, the transverse displacement (Section 2.). We are not particularly interested in the magnitudes of the coefficients. First we investigate the elements of Kh. The membrane is replaced by an assembly of triangles Δl with vertices Sl and edges SlSm as shown in Figure 2. Now it is found (Zhu [], Gladwell and Zhu []) that if the angles between the normals to the faces are all obtuse, as shown in Figure 2.

When q has the form 2. On the other hand, the last of equations 2. Thus the end conditions for the recurrence 2. Consider the beam system of Figure 2. A physically more acceptable discrete approximation of a beam is considered in detail by Gladwell [] and Lindberg []. [...] when q is expressed in terms of the principal coordinates.

For equation 2. We shall now use the principal coordinates to obtain the response of a system to sinusoidal forces. Substitute 2. Use the orthogonality of the xl, l = 1, 2, ..., q. We shall state the proof in a number of ways because each is instructive. The complete set of q equations which state that Y0 W0 is stationary w.r.t. [...]. Now express the energies in terms of principal coordinates.
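A sketch of the modal (principal-coordinate) calculation of the steady-state response to a sinusoidal force, with illustrative matrices of our own choosing: the uncoupled equations give each principal coordinate as the modal force divided by (eigenvalue minus forcing frequency squared).

```python
import numpy as np
from scipy.linalg import eigh

M = np.diag([1.0, 1.0, 2.0])
K = np.array([[ 4.0, -2.0,  0.0],
              [-2.0,  3.0, -1.0],
              [ 0.0, -1.0,  1.0]])
lam, X = eigh(K, M)               # X^T M X = I, X^T K X = diag(lam)

F = np.array([0.0, 0.0, 1.0])     # force amplitude vector for F*sin(w t)
w = 0.7                           # forcing frequency, assumed not a natural frequency
p = (X.T @ F) / (lam - w**2)      # principal coordinates from the uncoupled equations
u = X @ p                         # physical displacement amplitudes

print(np.allclose((K - w**2 * M) @ u, F))   # True: u solves (K - w^2 M) u = F
```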

In order to analyse equation 2. The coefficients [...]. This means that a constraint 2. Figure 2. The importance of the latter assumptions is that the denominator of 2. There may be more than one such minimizing vector, but there is always at least one, which we denote by x1. The inequality 2. We may now extend this analysis to higher eigenvalues by using 2. Examine the arguments in Sections 2.
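The minimization argument sketched here can be checked numerically: the Rayleigh quotient x^T K x / x^T M x of any trial vector is bounded below by the lowest eigenvalue, and the bound is attained at the minimizing vector x1. Illustrative data only.

```python
import numpy as np
from scipy.linalg import eigh

rng = np.random.default_rng(0)
M = np.diag([1.0, 2.0, 1.0])
K = np.array([[ 3.0, -1.0,  0.0],
              [-1.0,  2.0, -1.0],
              [ 0.0, -1.0,  1.0]])
lam, X = eigh(K, M)

def rayleigh(x):
    return (x @ K @ x) / (x @ M @ x)

trial = rng.standard_normal((1000, 3))
print(min(rayleigh(x) for x in trial) >= lam[0] - 1e-12)   # True: never below lambda_1
print(np.isclose(rayleigh(X[:, 0]), lam[0]))               # True: minimum attained at x1
```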


Chapter 3. Jacobi Matrices. "Let no one say that I have said nothing new; the arrangement of the subject is new." We will start by considering systems like the one shown in Figure 2. At the end of the section we will show that many of the results may be generalised to apply to systems like that in 2. The most important property of the eigenvalues of such systems is that they are simple, i.e., distinct. We shall now establish these and other results. Throughout the next few chapters, we redevelop analysis originally established by Gantmacher and Krein [98]. Their book was republished in [...]. First, we suppose that M is a strictly positive diagonal matrix, as in 2.

We now prove: 3. Theorem 3. The latter part of property 2 now follows directly from 3. Theorem 3. On the other hand, for sufficiently [...]. The second part of the theorem now follows immediately. Corollary 3. By Corollary 3. This theorem is usually stated in the form: the eigenvalues of successive principal minors interlace each other. In this section we outline some of the basic properties of orthogonal polynomials. To introduce the concept formally we let Pq denote the linear space of polynomials of order q, i.
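The interlacing statement quoted above ("the eigenvalues of successive principal minors interlace each other") is easy to observe numerically; a sketch with an arbitrary Jacobi matrix of our own:

```python
import numpy as np

a = np.array([2.0, 3.0, 2.5, 4.0])       # diagonal (illustrative values)
b = np.array([-1.0, -0.5, -1.2])         # codiagonal, strictly negative
J = np.diag(a) + np.diag(b, 1) + np.diag(b, -1)

prev = np.linalg.eigvalsh(J[:1, :1])
for r in range(2, 5):
    cur = np.linalg.eigvalsh(J[:r, :r])
    # each eigenvalue of the (r-1) x (r-1) minor lies strictly between neighbours of the r x r one
    print(np.all(cur[:-1] < prev) and np.all(prev < cur[1:]))
    prev = cur
```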

The Gram-Schmidt procedure does not provide a computationally convenient means for computing the tl; instead we use Forsythe [90]. It may therefore be expressed in terms of the linearly independent - see Ex. Thus 3. Returning to equation 3. This means that the weights 3. Exercises 3. We return to the analysis of Section 3. Now return to Theorem 3. We now investigate the nodes of this line, i.e., the points where it crosses zero. Table 3. Figure 3. We now establish an identity which will enable us to prove further results concerning the eigenvectors.

Table 3. Since, by Theorem 3. On the other hand vt A 0, which, when used with 3. while Theorem 3. These two inequalities imply that the only possible ordering of the nodes is 0 < [...]. See Gladwell a [] for some related results. Show that if the matrix J of 3. Under these conditions we may prove that the solutions of 3.
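The node counts discussed here can likewise be checked numerically: for a Jacobi matrix, the eigenvector belonging to the j-th smallest eigenvalue changes sign exactly j-1 times. A sketch with illustrative entries (no component of such an eigenvector vanishes, so the sign test below is safe):

```python
import numpy as np

a = np.array([2.0, 2.5, 3.0, 2.0, 4.0])
b = np.array([-1.0, -0.8, -1.1, -0.6])       # strictly negative codiagonal
J = np.diag(a) + np.diag(b, 1) + np.diag(b, -1)

lam, U = np.linalg.eigh(J)                   # columns ordered by increasing eigenvalue
for j in range(5):
    u = U[:, j]
    sign_changes = np.sum(u[:-1] * u[1:] < 0)
    print(j, sign_changes)                   # the (j+1)-th eigenvector has exactly j sign changes
```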

In particular, we can show that the eigenvalues of 3. To obtain these results we need to return to the analysis in Section 3. Thus Theorem 3. The proof of Theorem 3. The last term in 3. The remainder of the proof of Theorem 3. We need to make small changes in the proof of Theorem 3. We may make similar changes to the proofs of Theorems 3.

Make appropriate changes in the proofs of Theorems 3. It appears that his primary interest was in the qualitative properties of the solutions of, and the inverse problems for, the Sturm-Liouville equation (see Chapter 10), and the discrete problems were studied because such problems were met in any approximate analysis of Sturm-Liouville problems.

Consider the simple system shown in Figure 4. The theory presented in this chapter provides various generalisations of this analysis to a lumped-mass system made up of q masses. The chapter falls into three parts: a discussion of inverse problems for a Jacobi matrix; mass-spring realisations of these problems; and generalisations and variants of these problems.

Exercises 4. Show that for the system of Figure 4. The basic theorem is Theorem 4. We recall Ex. The theorem is at once an existence (there is such a matrix) and a uniqueness (there is only one) statement. We shall prove existence by actually constructing a matrix, and will do so by using the so-called Lanczos algorithm; the algorithm demonstrates that J is unique. This algorithm has the advantage that numerically it is well conditioned.

An independent proof that the matrix is unique is left to Ex. But this means that the columns of X, like the columns of U, are orthonormal. Now we proceed to rewrite the eigenvalue equations 4. The set of equations 4. Take this equation column by column. This procedure is called the Lanczos algorithm; see Lanczos [], Golub [], Golub and Van Loan [] and Kautsky and Golub []. Actually, what we have described is an inverse version of the original Lanczos algorithm. Rewrite the procedure described in equation 4. Suppose A ∈ Mq. If A is symmetric, i.
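A hedged sketch of the inverse (Lanczos) construction described above: given the prescribed eigenvalues and the normalized end components of the eigenvectors (the "weights"), running the Lanczos process on the diagonal matrix of eigenvalues, started from the weight vector, produces the unique tridiagonal matrix with that spectral data. Variable names, and the use of first rather than last eigenvector components, are our choices for illustration, not the book's; the two are related by the reversal discussed later.

```python
import numpy as np

def jacobi_from_spectrum_and_weights(lam, w):
    """Build the tridiagonal matrix with eigenvalues lam whose normalized
    eigenvectors have first components w, via Lanczos applied to diag(lam)."""
    lam = np.asarray(lam, dtype=float)
    q = np.asarray(w, dtype=float)
    q = q / np.linalg.norm(q)
    n = lam.size
    alpha = np.zeros(n)          # diagonal of the reconstructed matrix
    beta = np.zeros(n - 1)       # codiagonal magnitudes
    q_prev = np.zeros(n)
    for k in range(n):
        v = lam * q - (beta[k - 1] * q_prev if k > 0 else 0.0)
        alpha[k] = q @ v
        v = v - alpha[k] * q
        if k < n - 1:
            beta[k] = np.linalg.norm(v)
            q_prev, q = q, v / beta[k]
    return np.diag(alpha) + np.diag(beta, 1) + np.diag(beta, -1)

# check: the reconstructed matrix has the prescribed spectrum
J = jacobi_from_spectrum_and_weights([1.0, 2.0, 4.0], [0.5, 0.5, 0.70710678])
print(np.round(np.linalg.eigvalsh(J), 6))
```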

Friedland and Melkman [94] discuss the inverse eigenvalue problem in the context of non-negative matrices. If A ∈ Vq, the matrix obtained by deleting the lth row and column of A is called a truncated matrix. He proved that there is at most one matrix J with the required property. Hochstadt [] attempted to construct this unique Jacobi matrix, but he did not show that his method would always lead to real values of the codiagonal elements el. Gray and Wilson [] presented an alternative, inductive construction of J.

An independent uniqueness proof was given by Hald []. In this section we shall present two methods for constructing J. The second, which will later be generalised to inverse problems for band matrices, relies on the Lanczos algorithm described in Section 4. Having found the weights zl by using 4. The only major difficulty [...].


In seeking to overcome this difficulty [...]. It reverses the order of the rows and the columns of J, i. We prove Theorem 4. For once we step out of sequence, and use the notation we will introduce in Section 6. The second method of constructing J is due to Golub and Boley []. We can carry out the analysis for an arbitrary symmetric matrix 4. The condition that i be stationary yields 4. The analysis of Section 2.

On multiplying 4. The interlacing condition ensures that the right hand side of 4. This equation thus yields x1. We stress the importance of the analysis in equations 4. There is a third inverse problem which appears in a number of contexts. If we know that A is a Jacobi matrix then, of course, we can use the Lanczos algorithm to determine it. A matrix A is said to be persymmetric if it is symmetric, and also symmetric about the second diagonal, the one going from top right to bottom left.
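A small sketch of the definition just given; the test P A^T P = A, with P the exchange (flip) matrix, expresses symmetry about the second diagonal. Names and data are ours.

```python
import numpy as np

def is_persymmetric(A, tol=1e-12):
    """A is persymmetric if it is symmetric and also symmetric about the
    second (top-right to bottom-left) diagonal, i.e. P A^T P = A."""
    P = np.fliplr(np.eye(len(A)))
    return np.allclose(A, A.T, atol=tol) and np.allclose(A, P @ A.T @ P, atol=tol)

A = np.array([[ 2.0, -1.0,  0.0],
              [-1.0,  3.0, -1.0],
              [ 0.0, -1.0,  2.0]])
print(is_persymmetric(A))    # True: the entries are mirrored about both diagonals
```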

Now we need only one spectrum, not two. But then Theorem 4. See Hochstadt [] for another variant of this inverse eigenvalue problem. Stage (i) was discussed in Section 3. Given the spectra of the systems [...], we construct the system in Figure 4. By using the explicit form of K in equation 2. We need to be sure that d so calculated will be a strictly positive vector. We use induction. We may now return to equation 4. Thus the solution of equation 4.

The reconstruction from the spectra of a and c proceeds along similar lines; we merely renumber the masses starting from the right (Ex.). Using x and y we may write the solution of 4. The system in Figure 4. The chosen parameter is merely a scaling factor; the total mass, or alternatively one individual mass, say pq, would determine it. If we take pq as known, then equation 4.
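A hedged sketch of the "untangling" step: once the Jacobi matrix J = M^(-1/2) K M^(-1/2) is known and one scaling parameter has been chosen (here the end mass, the pq of the text), the masses and stiffnesses follow recursively. The fixed-free configuration and all names below are our assumptions for illustration, not the book's.

```python
import numpy as np

def untangle_fixed_free(J, m_last):
    """Recover masses m and stiffnesses k of an assumed fixed-free in-line
    chain from its Jacobi matrix J, given the value chosen for the last mass."""
    a = np.diag(J).copy()
    b = np.diag(J, 1).copy()          # codiagonal, negative for a Jacobi matrix
    n = len(a)
    m = np.zeros(n)
    k = np.zeros(n)
    m[n - 1] = m_last
    k[n - 1] = a[n - 1] * m[n - 1]    # free right end: last diagonal entry is k_n / m_n
    for i in range(n - 2, -1, -1):
        m[i] = k[i + 1] ** 2 / (b[i] ** 2 * m[i + 1])
        k[i] = a[i] * m[i] - k[i + 1]
    return m, k

# quick check on a known chain: m = [1, 2, 1], k = [3, 2, 1], right end free
m_true = np.array([1.0, 2.0, 1.0]); k_true = np.array([3.0, 2.0, 1.0])
K = np.diag(np.r_[k_true[:-1] + k_true[1:], k_true[-1]]) \
    - np.diag(k_true[1:], 1) - np.diag(k_true[1:], -1)
J = np.diag(m_true**-0.5) @ K @ np.diag(m_true**-0.5)
print(untangle_fixed_free(J, m_true[-1]))   # recovers m_true and k_true
```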

The reconstruction is complete. The third system is free-free, as shown in Figure 4. It is Theorem 4. The proof is straightforward; see Ex. Now we may complete the reconstruction. Reconstruct the system of Figure 4. Use the solution 4. Provide a constructive proof of Theorem 4. Show that there is a one-parameter family of systems, each member of which has the stated eigenvalues. We have renumbered the masses so that the spring is attached at p1. In this case it is easier to work initially with the original equation 4.

The interlacing condition 4. Finally, multiplying 4. Now we use x1 in the Lanczos algorithm, and the untangling procedure as before. There are still more ways in which to obtain a second spectrum, for which see Nylen and Uhlig a [], Nylen and Uhlig b []. Ram [] supposes that the system of Figure 4. He makes use of some simple but powerful results found in Ram and Blech [].

The interlacing of these two spectra may thus be interpreted as the interlacing of the poles and zeros of the response function, a result which is well known in control theory. The result of Section 4. See Gladwell and Gbadeyan [] for an alternative treatment.
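The pole-zero interpretation can be illustrated numerically: for an assumed in-line chain, the natural frequencies (poles of the end receptance) interlace the natural frequencies of the same chain with its end mass held fixed (the zeros of that receptance). Illustrative data and configuration only.

```python
import numpy as np
from scipy.linalg import eigh

m = np.array([1.0, 2.0, 1.0, 1.5])
k = np.array([3.0, 2.0, 2.5, 1.0])
n = len(m)
M = np.diag(m)
K = np.zeros((n, n))
for i in range(n):                      # fixed-free chain, as assumed earlier
    K[i, i] += k[i]
    if i + 1 < n:
        K[i, i] += k[i + 1]
        K[i, i + 1] = K[i + 1, i] = -k[i + 1]

poles = eigh(K, M, eigvals_only=True)                      # squared natural frequencies
zeros = eigh(K[:-1, :-1], M[:-1, :-1], eigvals_only=True)  # end mass constrained

print(np.all(poles[:-1] < zeros) and np.all(zeros < poles[1:]))   # True: interlacing
```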

An experimental/theoretical study of the problem of reconstructing a spring-mass system from frequency response data for an actual system may be found in Gladwell and Movahhedy [] and Movahhedy, Ismail and Gladwell []. We shall now consider some physical problems relating to persymmetric matrices. Figure 4. Now the odd-numbered symmetrical modes will be the modes of the left-hand half with pq/2 at the end and free there, as in Figure 4.


Since one spectrum is insufficient [...], we assume (Gladwell []) that M can be written in terms of K: 4. where D is an as yet undetermined diagonal matrix with positive entries, and f is an arbitrary positive number. Since K has negative codiagonal, M will have positive codiagonal.

Suppose that 4. To reconstruct J we need a second spectrum. If the eigenvalues of 4. This can be done exactly as in Section 4. We exclude the free-free condition at this stage. Within themselves these sets of eigenvalues must be distinct. There are two cases. There can be more than one such pair. To analyse the situation we suppose that the eigenvalue equation 4. Thus B may be reconstructed uniquely. Any one of these values may be substituted into 4. Using 4. Combine these weights with those corresponding to the distinct eigenvalues, and compute B and C.

Express the eigenvalue problem for J in 4. In equation 4. Equation 4. The mass-spring models considered in this chapter are very similar to the shear building model used extensively by Takewaki and his coworkers. Among the original papers most closely related to the concerns of this chapter are the following: Takewaki and Nakamura [], Takewaki, Nakamura and Arita [] and Takewaki and Nakamura [], Takewaki []. In this chapter we will consider some slightly more general problems but must admit that there are still only a few problems that we have been able to solve.

The special feature of a Jacobi matrix is its structure: it is tridiagonal, with strictly negative codiagonal. The structure of the matrix J in equation 4. The structures of K and M, in turn, derive from the structure of the system, an in-line mass system, to which they belong. The natural tool for describing and analysing the structure of a system is graph theory. This is not the place to prove any theorems in graph theory, but it is useful to introduce some of the basic concepts.

A graph G is a set of vertices, connected by edges. The set of vertices is called the vertex set, and is denoted by V; the set of edges is called the edge set, E. Figure 5. This is actually an example of a simple, undirected graph. The graph is undirected because there is no preferred direction associated with an edge. Henceforth, the term graph will be used to mean a simple, undirected graph.

The path is clearly one of the simplest graphs. A symmetric bordered diagonal matrix B has a star on q vertices as its associated graph. The underlying matrix is a ring on q vertices as shown in Figure 5. Note that the intersections of the diagonals in Figure 5. The graphs shown in Figure 5. Renumbering the vertices of a graph leads to a rearranging of the rows and of the columns of any symmetric matrix based on that graph. When a graph is disconnected, it may be partitioned, as in Figure 5.

Then we can always rearrange the numbering, as in (b), so that vertex numbers in any one connected subgraph form a consecutive sequence. If it is reducible, then it can be transformed to the form 5. Note: the concepts of connectedness of a directed graph, and the corresponding concept of irreducibility of a general (not necessarily symmetric) matrix, are more complex than those described here. See Horn and Johnson [], Section 6. Now we may state the general result. Theorem 5. It is easy to check that if a spring other than k1 is removed from a spring-mass system such as that in Figure 4.
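A sketch of the correspondence between matrix structure and graph connectivity: build the graph whose edges are the nonzero off-diagonal entries and count its connected components; zeroing a codiagonal entry of a tridiagonal (path) matrix, the analogue of removing a coupling spring, disconnects the graph and makes the matrix reducible. Names below are ours.

```python
import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.csgraph import connected_components

def matrix_graph_components(A, tol=0.0):
    """The graph of a symmetric matrix has an edge (i, j) whenever the
    off-diagonal entry A[i, j] is nonzero; the matrix is irreducible exactly
    when this graph is connected."""
    adj = (np.abs(A) > tol).astype(int)
    np.fill_diagonal(adj, 0)
    return connected_components(csr_matrix(adj), directed=False)

# A tridiagonal (path-graph) matrix is connected, hence irreducible ...
path = np.diag([2., 2., 2.]) + np.diag([-1., -1.], 1) + np.diag([-1., -1.], -1)
print(matrix_graph_components(path)[0])    # 1 component

# ... but zeroing a codiagonal entry ("removing a spring") disconnects it.
broken = path.copy()
broken[1, 2] = broken[2, 1] = 0.0
print(matrix_graph_components(broken)[0])  # 2 components: the matrix is now reducible
```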

A tree is a special kind of connected graph: one which has no circuits. Now there is a unique chain of edges connecting any one vertex to any other. The path and the star are both trees, but a ring (see Figure 5.) is not. A connected graph has one or more spanning trees. If G is a connected graph with vertex set V, then a spanning tree S of G is a maximal tree with the vertex set V; if any more edges in E were added to S then it would cease to be a tree: it would have a circuit.


It may be proved that all the spanning trees of a given graph G have the same number of edges. As stated in Section 1. The transformation is called an equivalence transformation. It is a special equivalence relation (Ex.). In general, an equivalence transformation will transform a symmetric pencil into an unsymmetric pencil. Those which preserve symmetry are characterised by 5. Equations 5. It is difficult [...]. Instead, we use the fact that a product of orthogonal matrices is itself orthogonal (Ex.).
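A sketch of the symmetry-preserving case: transforming the pencil (A, B) by a congruence with an orthogonal matrix G keeps both matrices symmetric and leaves the eigenvalues of the pencil unchanged. Data are illustrative only.

```python
import numpy as np
from scipy.linalg import eigh, qr

rng = np.random.default_rng(1)
A = np.array([[4.0, -1.0], [-1.0, 2.0]])
B = np.array([[2.0,  0.5], [ 0.5, 1.0]])      # symmetric positive definite
G, _ = qr(rng.standard_normal((2, 2)))        # a random orthogonal matrix

A2, B2 = G @ A @ G.T, G @ B @ G.T             # transformed pencil
print(np.allclose(A2, A2.T) and np.allclose(B2, B2.T))       # symmetry preserved
print(np.allclose(eigh(A, B, eigvals_only=True),
                  eigh(A2, B2, eigvals_only=True)))          # eigenvalues unchanged
```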