Adjacency matrix

In mathematics and computer science, the adjacency matrix of a finite graph G on n vertices is the n × n matrix in which the nondiagonal entry a_{ij} is the number of edges joining vertex i and vertex j, and the diagonal entry a_{ii} is twice the number of loops at vertex i. Each graph determines its adjacency matrix uniquely (up to the ordering of the vertices), and no other graph shares that matrix. In the special case of a finite simple graph, the adjacency matrix is a (0,1)-matrix with zeros on its diagonal. If the graph is undirected, the adjacency matrix is symmetric.
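The definition above can be sketched in code. This is a minimal illustration, not from the article; the function name and edge-list input format are assumptions.

```python
# Sketch: building the adjacency matrix of an undirected multigraph from an
# edge list, with a loop at vertex i contributing 2 to the diagonal entry a_ii.
def adjacency_matrix(n, edges):
    """Return the n x n adjacency matrix for an undirected graph given as (u, v) pairs."""
    a = [[0] * n for _ in range(n)]
    for u, v in edges:
        if u == v:
            a[u][u] += 2          # a loop counts twice on the diagonal
        else:
            a[u][v] += 1          # each edge joins u and v ...
            a[v][u] += 1          # ... symmetrically, since the graph is undirected
    return a

# A triangle on 3 vertices plus a loop at vertex 0:
print(adjacency_matrix(3, [(0, 1), (1, 2), (0, 2), (0, 0)]))
# → [[2, 1, 1], [1, 0, 1], [1, 1, 0]]
```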
For sparse graphs, that is graphs with few edges, an adjacency list is often the preferred representation because it uses less space. An alternative matrix representation for a graph is the incidence matrix.
The relationship between a graph and its adjacency matrix is studied in spectral graph theory.
Examples
The adjacency matrix for the following vertex-labeled graph
[Image: 6ngraph2.png — a vertex-labeled graph on six vertices]
is
<math>\begin{pmatrix}
2 & 1 & 0 & 0 & 1 & 0\\
1 & 0 & 1 & 0 & 1 & 0\\
0 & 1 & 0 & 1 & 0 & 0\\
0 & 0 & 1 & 0 & 1 & 1\\
1 & 1 & 0 & 1 & 0 & 0\\
0 & 0 & 0 & 1 & 0 & 0\\
\end{pmatrix}.</math>
Properties
The adjacency matrix of an undirected graph is symmetric, and therefore has a complete set of real eigenvalues and an orthogonal basis of eigenvectors. The set of eigenvalues of a graph is the spectrum of the graph.
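The symmetry property is easy to check directly. A small sketch using the article's 6-vertex example matrix (computing the spectrum itself would additionally need an eigenvalue routine, e.g. from a numerical library):

```python
# Sketch: verifying that the adjacency matrix of an undirected graph equals
# its own transpose, using the 6-vertex example matrix from the article.
A = [
    [2, 1, 0, 0, 1, 0],
    [1, 0, 1, 0, 1, 0],
    [0, 1, 0, 1, 0, 0],
    [0, 0, 1, 0, 1, 1],
    [1, 1, 0, 1, 0, 0],
    [0, 0, 0, 1, 0, 0],
]
transpose = [list(row) for row in zip(*A)]   # swap rows and columns
print(A == transpose)  # → True
```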
Suppose two directed or undirected graphs G_{1} and G_{2} with adjacency matrices A_{1} and A_{2} are given. G_{1} and G_{2} are isomorphic if and only if there exists a permutation matrix P such that
 P A_{1} P^{−1} = A_{2}.
In particular, A_{1} and A_{2} are similar and have therefore the same minimal polynomial, characteristic polynomial, eigenvalues, determinant and trace. These can therefore serve as isomorphism invariants of graphs. Given the two adjacency matrices, it is possible to reconstruct the permutation matrix used: see permutation matrix for details.
Note, however, that the converse is not true: two graphs may possess the same set of eigenvalues but not be isomorphic (one cannot 'hear' the shape of a graph). The multiplication with the permutation matrix can be visualized as a relabeling of the vertices.
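The relabeling view of conjugation by a permutation matrix can be sketched at the index level. The helper below is an assumption for illustration, not from the article: applying a permutation perm moves the old entry at (i, j) to (perm[i], perm[j]), which is exactly what P A P^{−1} does to the matrix.

```python
# Sketch (hypothetical helper): relabel the vertices of a graph by a permutation,
# the entry-wise equivalent of conjugating the adjacency matrix by P.
def relabel(a, perm):
    n = len(a)
    b = [[0] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            b[perm[i]][perm[j]] = a[i][j]   # old entry (i, j) lands at (perm[i], perm[j])
    return b

# The path 0-1-2, relabeled by swapping vertices 0 and 2, is the path 2-1-0:
# an isomorphic graph, and here the relabeling even fixes the matrix (an automorphism).
path = [[0, 1, 0], [1, 0, 1], [0, 1, 0]]
print(relabel(path, [2, 1, 0]) == path)  # → True
```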
If A is the adjacency matrix of the directed or undirected graph G, then the matrix A^{n} (i.e. the matrix product of n copies of A) has an interesting interpretation: the entry in row i and column j gives the number of (directed or undirected) walks of length n from vertex i to vertex j. (A walk may revisit vertices and edges, so this count is generally larger than the number of paths.)
The matrix I − A (where I denotes the n-by-n identity matrix) is invertible if and only if there are no directed cycles in the graph G. In this case, the inverse (I − A)^{−1} has the following interpretation: the entry in row i and column j gives the number of directed paths from vertex i to vertex j (which is always finite if there are no directed cycles). This can be understood using the geometric series for matrices:
 (I − A)^{−1} = I + A + A^{2} + A^{3} + ...
corresponding to the fact that the number of paths from i to j equals the number of paths of length 0 plus the number of paths of length 1 plus the number of paths of length 2 etc. The main diagonal of every adjacency matrix corresponding to a graph without loops has all zero entries.
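For a directed acyclic graph the series is finite, since A is nilpotent (A^{n} = 0), so the path counts can be accumulated term by term without inverting anything. A sketch under that assumption:

```python
# Sketch: for a DAG on n vertices, A^n = 0, so I + A + A^2 + ... + A^{n-1}
# already equals (I - A)^{-1}; entry (i, j) counts all directed paths i -> j.
def matmul(x, y):
    n = len(x)
    return [[sum(x[i][k] * y[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def count_paths(a):
    n = len(a)
    total = [[int(i == j) for j in range(n)] for i in range(n)]  # I: the length-0 paths
    power = [row[:] for row in a]                                # current term A^k
    for _ in range(n - 1):
        for i in range(n):
            for j in range(n):
                total[i][j] += power[i][j]
        power = matmul(power, a)
    return total

# Diamond DAG 0->1, 0->2, 1->3, 2->3: exactly two directed paths from 0 to 3.
dag = [[0, 1, 1, 0], [0, 0, 0, 1], [0, 0, 0, 1], [0, 0, 0, 0]]
print(count_paths(dag)[0][3])  # → 2
```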
Tradeoffs as a data structure
When used as a data structure, the main competitor of the adjacency matrix is the adjacency list. Because each entry in the adjacency matrix requires only one bit, the matrix can be represented in a very compact way, occupying only n^{2}/8 bytes of contiguous space, where n is the number of vertices. Besides avoiding wasted space, this compactness encourages locality of reference.
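The one-bit-per-entry packing can be sketched with a flat byte buffer; the class below is an illustrative assumption, not a standard API.

```python
# Sketch: a simple-graph adjacency matrix packed one bit per entry into a
# bytearray of ceil(n*n / 8) bytes, as described above.
class BitMatrix:
    def __init__(self, n):
        self.n = n
        self.bits = bytearray((n * n + 7) // 8)

    def set(self, i, j):
        k = i * self.n + j                     # flatten (i, j) to a bit index
        self.bits[k // 8] |= 1 << (k % 8)

    def get(self, i, j):
        k = i * self.n + j
        return (self.bits[k // 8] >> (k % 8)) & 1

m = BitMatrix(1000)              # a million entries in n^2/8 = 125,000 bytes
m.set(3, 7)
print(m.get(3, 7), m.get(7, 3))  # → 1 0  (a directed edge sets only one bit)
```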
On the other hand, for a sparse graph, adjacency lists win out, because they use no space to represent edges that are not present. Using a naive linked-list implementation on a 32-bit computer, an adjacency list for an undirected graph requires about 16e bytes of storage, where e is the number of edges.
Noting that a simple graph can have at most n^{2} edges (allowing loops), let d = e/n^{2} denote the density of the graph. Then 16e > n^{2}/8, that is, the adjacency list representation occupies more space, precisely when d > 1/128. Thus a graph must be sparse indeed to justify an adjacency list representation.
Besides the space tradeoff, the different data structures also facilitate different operations. Finding all vertices adjacent to a given vertex in an adjacency list is as simple as reading the list; with an adjacency matrix, an entire row must be scanned, which takes O(n) time. Whether there is an edge between two given vertices can be determined at once with an adjacency matrix, while an adjacency list requires time proportional to the minimum degree of the two vertices.
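The two queries just contrasted can be shown side by side on the same graph; the edge list and dict-of-lists layout below are illustrative assumptions.

```python
# Sketch: the same undirected graph stored both as a matrix and as an
# adjacency list (dict of neighbor lists), queried both ways.
edges = [(0, 1), (0, 4), (1, 2), (2, 3), (3, 4), (3, 5)]
n = 6
matrix = [[0] * n for _ in range(n)]
adj = {v: [] for v in range(n)}
for u, v in edges:
    matrix[u][v] = matrix[v][u] = 1
    adj[u].append(v)
    adj[v].append(u)

# Edge query: O(1) matrix lookup, versus scanning one vertex's list.
print(matrix[3][4] == 1, 4 in adj[3])                  # → True True

# Neighborhood query: read one list, versus scanning a whole O(n) row.
print(adj[3], [j for j in range(n) if matrix[3][j]])   # → [2, 4, 5] [2, 4, 5]
```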