Matrix (mathematics): Difference between revisions

 
== Linear equations ==<!-- [[Pemisahan matriks]] links here. Please do not change. -->
{{Main|Persamaan linear|Sistem persamaan linear}}
Matrices can be used to compactly write and work with multiple linear equations, that is, systems of linear equations. For example, if '''A''' is an ''m''-by-''n'' matrix, '''x''' designates a column vector (that is, an ''n'' × 1 matrix) of ''n'' variables ''x''{{sub|1}}, ''x''{{sub|2}}, ..., ''x''{{sub|''n''}}, and '''b''' is an ''m'' × 1 column vector, then the matrix equation
:<math>\mathbf{Ax} = \mathbf{b}</math>
 
is equivalent to the system of linear equations<ref>{{Harvard citations |last1=Brown |year=1991 |nb=yes |loc=I.2.21 and 22}}</ref>
:<math>\begin{align}
a_{1,1}x_1 + a_{1,2}x_2 + &\cdots + a_{1,n}x_n = b_1 \\
&\ \ \vdots \\
a_{m,1}x_1 + a_{m,2}x_2 + &\cdots + a_{m,n}x_n = b_m
\end{align}</math>
 
Using matrices, this can be solved more compactly than would be possible by writing out all the equations separately. If ''n'' = ''m'' and the equations are [[persamaan independen|independent]], then this can be done by writing
:<math>\mathbf{x} = \mathbf{A}^{-1} \mathbf{b}</math>
 
where '''A'''{{sup|−1}} is the [[matriks invers|inverse matrix]] of '''A'''. If '''A''' has no inverse, solutions, if any, can be found using its [[invers umum|generalized inverse]].
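
As a minimal computational sketch (assuming the third-party NumPy library; the matrix and vector are chosen only for illustration), both cases can be handled numerically:

<syntaxhighlight lang="python">
import numpy as np

# Coefficient matrix A and right-hand side b of the system Ax = b.
A = np.array([[1.0, 2.0],
              [3.0, 4.0]])
b = np.array([5.0, 6.0])

# A is invertible (det(A) = -2), so the system has the unique solution
# x = A^{-1} b; np.linalg.solve computes it without forming the inverse.
x = np.linalg.solve(A, b)        # array([-4. ,  4.5])

# If A were singular (or not square), a generalized inverse such as the
# Moore-Penrose pseudoinverse yields a least-squares solution instead.
x_ls = np.linalg.pinv(A) @ b
</syntaxhighlight>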
 
== Linear transformations ==
{{Main|Transformasi linear|Matriks transformasi}}
[[Berkas:Area parallellogram as determinant.svg|thumb|right|The vectors represented by a 2-by-2 matrix correspond to the sides of a unit square transformed into a parallelogram.]]
Matrices and matrix multiplication reveal their essential features when related to ''linear transformations'', also known as ''linear maps''. <span id="linear_maps">A real ''m''-by-''n'' matrix '''A''' gives rise to a linear transformation '''R'''{{sup|''n''}} → '''R'''{{sup|''m''}} mapping each vector '''x''' in '''R'''{{sup|''n''}} to the (matrix) product '''Ax''', which is a vector in '''R'''{{sup|''m''}}. Conversely, each linear transformation ''f'': '''R'''{{sup|''n''}} → '''R'''{{sup|''m''}} arises from a unique ''m''-by-''n'' matrix '''A''': explicitly, the {{nowrap|(''i'', ''j'')-entry}} of '''A''' is the ''i''th coordinate of ''f''('''e'''{{sub|''j''}}), where {{nowrap begin}}'''e'''{{sub|''j''}} = (0, ..., 0, 1, 0, ..., 0){{nowrap end}} is the [[vektor satuan|unit vector]] with 1 in the ''j''th position and 0 elsewhere.</span> The matrix '''A''' is said to represent the linear map ''f'', and '''A''' is called the ''transformation matrix'' of ''f''.
 
For example, the 2 × 2 matrix
:<math>\mathbf{A} = \begin{bmatrix} a & c\\b & d \end{bmatrix}</math>
 
can be viewed as the transformation of the [[satuan persegi|unit square]] into a [[jajaran genjang|parallelogram]] with vertices at {{nowrap|(0, 0)}}, {{nowrap|(''a'', ''b'')}}, {{nowrap|(''a'' + ''c'', ''b'' + ''d'')}}, and {{nowrap|(''c'', ''d'')}}. The parallelogram pictured at the right is obtained by multiplying '''A''' with each of the column vectors <math>\begin{bmatrix} 0 \\ 0 \end{bmatrix}, \begin{bmatrix} 1 \\ 0 \end{bmatrix}, \begin{bmatrix} 1 \\ 1 \end{bmatrix}</math>, and <math>\begin{bmatrix}0 \\ 1\end{bmatrix}</math> in turn. These vectors define the vertices of the unit square.
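
Indeed, carrying out the four matrix products confirms these vertices:
:<math>\mathbf{A}\begin{bmatrix} 0 \\ 0 \end{bmatrix} = \begin{bmatrix} 0 \\ 0 \end{bmatrix}, \quad \mathbf{A}\begin{bmatrix} 1 \\ 0 \end{bmatrix} = \begin{bmatrix} a \\ b \end{bmatrix}, \quad \mathbf{A}\begin{bmatrix} 1 \\ 1 \end{bmatrix} = \begin{bmatrix} a + c \\ b + d \end{bmatrix}, \quad \mathbf{A}\begin{bmatrix} 0 \\ 1 \end{bmatrix} = \begin{bmatrix} c \\ d \end{bmatrix}.</math>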
 
The following table shows several [[2 × 2 matriks nyata|2-by-2 real matrices]] with the associated linear maps of '''R'''{{sup|2}}. The blue original is mapped to the green grid and shapes. The origin (0, 0) is marked with a black point.
{| class="wikitable" style="text-align:center; margin:1em auto 1em auto;"
|-
| [[Pemetaan geser|Horizontal shear]] with ''m'' = 1.25
| [[Refleksi (matematika)|Reflection]] through the vertical axis
| [[Pemetaan pemerasan|Squeeze mapping]] with ''r'' = 3/2
| [[Penskalaan (geometri)|Scaling]] by a factor of 3/2
|<span id="rotation_matrix">[[Matriks rotasi|Rotation]] by π/6 = 30°</span>
|-
| <math>\begin{bmatrix}
1 & 1.25 \\
0 & 1
\end{bmatrix}</math>
| <math>\begin{bmatrix}
-1 & 0 \\
0 & 1
\end{bmatrix}</math>
| <math>\begin{bmatrix}
\frac{3}{2} & 0 \\
0 & \frac{2}{3}
\end{bmatrix}</math>
|<math>\begin{bmatrix}
\frac{3}{2} & 0 \\
0 & \frac{3}{2}
\end{bmatrix}</math>
|<math>\begin{bmatrix}
\cos\left(\frac{\pi}{6}\right) & -\sin\left(\frac{\pi}{6}\right) \\
\sin\left(\frac{\pi}{6}\right) & \cos\left(\frac{\pi}{6}\right)
\end{bmatrix}</math>
|-
| width="20%" | [[Berkas:VerticalShear m=1.25.svg|175px]]
| width="20%" | [[Berkas:Flip map.svg|150px]]
| width="20%" | [[Berkas:Squeeze r=1.5.svg|150px]]
| width="20%" | [[Berkas:Scaling by 1.5.svg|125px]]
| width="20%" | [[Berkas:Rotation by pi over 6.svg|125px]]
|}
 
Under the [[bijection|1-to-1 correspondence]] between matrices and linear maps, matrix multiplication corresponds to [[komposisi fungsi|composition]] of maps:<ref>{{Harvard citations |last1=Greub |year=1975 |nb=yes |loc=Section III.2}}</ref> if a ''k''-by-''m'' matrix '''B''' represents another linear map ''g'': '''R'''{{sup|''m''}} → '''R'''{{sup|''k''}}, then the composition {{nowrap|''g'' ∘ ''f''}} is represented by '''BA''' because
:(''g'' ∘ ''f'')('''x''') = ''g''(''f''('''x''')) = ''g''('''Ax''') = '''B'''('''Ax''') = ('''BA''')'''x'''.
 
The last equality follows from the above-mentioned associativity of matrix multiplication.
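
As a concrete check (with matrices chosen only for illustration), let
:<math>\mathbf{B} = \begin{bmatrix} 2 & 0 \\ 0 & 1 \end{bmatrix}, \quad \mathbf{A} = \begin{bmatrix} 1 & 1 \\ 0 & 1 \end{bmatrix}, \quad \text{so} \quad \mathbf{BA} = \begin{bmatrix} 2 & 2 \\ 0 & 1 \end{bmatrix};</math>
for <math>\mathbf{x} = \begin{bmatrix} 1 \\ 1 \end{bmatrix}</math>, both <math>\mathbf{B}(\mathbf{Ax})</math> and <math>(\mathbf{BA})\mathbf{x}</math> give <math>\begin{bmatrix} 4 \\ 1 \end{bmatrix}</math>.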
 
The [[Peringkat matriks|rank of a matrix]] '''A''' is the maximum number of [[bebas linear|linearly independent]] row vectors of the matrix, which is the same as the maximum number of linearly independent column vectors.<ref>{{Harvard citations |last1=Brown |year=1991 |nb=yes |loc=Definition II.3.3}}</ref> Equivalently it is the [[dimensi Hamel|dimension]] of the [[gambar (matematika)|image]] of the linear map represented by '''A'''.<ref>{{Harvard citations |last1=Greub |year=1975 |nb=yes |loc=Section III.1}}</ref> The [[Teorema pangkat–nulitas|rank–nullity theorem]] states that the dimension of the [[kernel (matriks)|kernel]] of a matrix plus the rank equals the number of columns of the matrix.<ref>{{Harvard citations |last1=Brown |year=1991 |nb=yes |loc=Theorem II.3.22}}</ref>
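
For example (a matrix chosen only for illustration), the 2-by-3 matrix
:<math>\mathbf{A} = \begin{bmatrix} 1 & 2 & 3 \\ 2 & 4 & 6 \end{bmatrix}</math>
has rank 1, since the second row is twice the first; by the theorem its kernel therefore has dimension 3 − 1 = 2.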
 
== Square matrices ==
{{Main|Matriks persegi}}
A [[Matriks persegi|square matrix]] is a matrix with the same number of rows and columns.<ref name=":4" /> An ''n''-by-''n'' matrix is known as a square matrix of order ''n''. Any two square matrices of the same order can be added and multiplied.
The entries ''a''{{sub|''ii''}} form the [[diagonal utama|main diagonal]] of a square matrix. They lie on the imaginary line that runs from the top left corner to the bottom right corner of the matrix.
 
=== Main types ===
:{| class="wikitable" style="float:right; margin:0ex 0ex 2ex 2ex;"
|-
! Name !! Example with ''n'' = 3
|-
| [[Matriks diagonal|Diagonal matrix]] || style="text-align:center;" | <math>
\begin{bmatrix}
a_{11} & 0 & 0 \\
0 & a_{22} & 0 \\
0 & 0 & a_{33} \\
\end{bmatrix}
</math>
|-
| [[Matriks segitiga bawah|Lower triangular matrix]] || style="text-align:center;" | <math>
\begin{bmatrix}
a_{11} & 0 & 0 \\
a_{21} & a_{22} & 0 \\
a_{31} & a_{32} & a_{33} \\
\end{bmatrix}
</math>
|-
| [[Matriks segitiga atas|Upper triangular matrix]] || style="text-align:center;" | <math>
\begin{bmatrix}
a_{11} & a_{12} & a_{13} \\
0 & a_{22} & a_{23} \\
0 & 0 & a_{33} \\
\end{bmatrix}
</math>
|}
 
==== Diagonal and triangular matrices ====
If all entries of '''A''' below the main diagonal are zero, '''A''' is called an ''upper [[matriks segitiga|triangular matrix]]''. Similarly, if all entries of '''A''' above the main diagonal are zero, '''A''' is called a ''lower triangular matrix''. If all entries outside the main diagonal are zero, '''A''' is called a [[matriks diagonal|diagonal matrix]].
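
As a similar computational sketch (again assuming NumPy; the matrix is chosen only for illustration), the three patterns can be obtained from an arbitrary square matrix:

<syntaxhighlight lang="python">
import numpy as np

M = np.arange(1, 10).reshape(3, 3)   # [[1, 2, 3], [4, 5, 6], [7, 8, 9]]

lower = np.tril(M)              # lower triangular: entries above the diagonal set to 0
upper = np.triu(M)              # upper triangular: entries below the diagonal set to 0
diagonal = np.diag(np.diag(M))  # diagonal matrix built from the main diagonal of M
</syntaxhighlight>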
<!--
==== Matriks identitas ====
{{Main|Matriks identitas}}
The ''identity matrix'' '''I'''{{sub|''n''}} of size ''n'' is the ''n''-by-''n'' matrix in which all the elements on the [[main diagonal]] are equal to 1 and all other elements are equal to 0, for example,
:<math>
\mathbf{I}_1 = \begin{bmatrix} 1 \end{bmatrix},
\ \mathbf{I}_2 = \begin{bmatrix}
1 & 0 \\
0 & 1
\end{bmatrix},
\ \cdots ,
\ \mathbf{I}_n = \begin{bmatrix}
1 & 0 & \cdots & 0 \\
0 & 1 & \cdots & 0 \\
\vdots & \vdots & \ddots & \vdots \\
0 & 0 & \cdots & 1
\end{bmatrix}
</math>
It is a square matrix of order ''n'', and also a special kind of [[diagonal matrix]]. It is called an identity matrix because multiplication with it leaves a matrix unchanged:
:{{nowrap begin}}'''AI'''{{sub|''n''}} = '''I'''{{sub|''m''}}'''A''' = '''A'''{{nowrap end}} for any ''m''-by-''n'' matrix '''A'''.
 
A nonzero scalar multiple of an identity matrix is called a ''scalar'' matrix. If the matrix entries come from a field, the scalar matrices form a group, under matrix multiplication, that is isomorphic to the multiplicative group of nonzero elements of the field.
 
====Symmetric or skew-symmetric matrix====
A square matrix '''A''' that is equal to its transpose, that is, {{nowrap begin}}'''A''' = '''A'''{{sup|T}}{{nowrap end}}, is a [[symmetric matrix]]. If instead, '''A''' is equal to the negative of its transpose, that is, {{nowrap begin}}'''A''' = −'''A'''{{sup|T}},{{nowrap end}} then '''A''' is a [[skew-symmetric matrix]]. In complex matrices, symmetry is often replaced by the concept of [[Hermitian matrix|Hermitian matrices]], which satisfy '''A'''{{sup|∗}} = '''A''', where the star or [[asterisk]] denotes the [[conjugate transpose]] of the matrix, that is, the transpose of the [[complex conjugate]] of '''A'''.
 
By the [[spectral theorem]], real symmetric matrices and complex Hermitian matrices have an [[eigenbasis]]; that is, every vector is expressible as a [[linear combination]] of eigenvectors. In both cases, all eigenvalues are real.<ref>{{Harvard citations |last1=Horn |last2=Johnson |year=1985 |nb=yes |loc=Theorem 2.5.6}}</ref> This theorem can be generalized to infinite-dimensional situations related to matrices with infinitely many rows and columns, see [[#Infinite matrices|below]].
 
====Invertible matrix and its inverse====
A square matrix '''A''' is called ''[[invertible matrix|invertible]]'' or ''non-singular'' if there exists a matrix '''B''' such that
:'''AB''' = '''BA''' = '''I'''{{sub|''n''}} ,<ref>{{Harvard citations |last1=Brown |year=1991 |nb=yes |loc=Definition I.2.28}}</ref><ref>{{Harvard citations |last1=Brown |year=1991 |nb=yes |loc=Definition I.5.13}}</ref>
where '''I'''{{sub|''n''}} is the ''n''×''n'' [[identity matrix]] with 1s on the [[main diagonal]] and 0s elsewhere. If '''B''' exists, it is unique and is called the ''[[Invertible matrix|inverse matrix]]'' of '''A''', denoted '''A'''{{sup|−1}}.
 
====Definite matrix====
{| class="wikitable" style="float:right; text-align:center; margin:0ex 0ex 2ex 2ex;"
|-
! [[Positive definite matrix]] !! [[Indefinite matrix]]
|-
| <math>\begin{bmatrix}
\frac{1}{4} & 0 \\
0 & 1 \\
\end{bmatrix}</math>
| <math>\begin{bmatrix}
\frac{1}{4} & 0 \\
0 & -\frac{1}{4}
\end{bmatrix}</math>
|-
| ''Q''(''x'', ''y'') = 1/4 ''x''{{sup|2}} + ''y''{{sup|2}}
| ''Q''(''x'', ''y'') = 1/4 ''x''{{sup|2}} − 1/4 ''y''{{sup|2}}
|-
| [[File:Ellipse in coordinate system with semi-axes labelled.svg|150px]] <br>Points such that ''Q''(''x'',''y'')=1 <br> ([[Ellipse]]).
| [[File:Hyperbola2 SVG.svg|150px]] <br> Points such that ''Q''(''x'',''y'')=1 <br> ([[Hyperbola]]).
|}
A symmetric ''n''×''n''-matrix '''A''' is called [[positive-definite matrix|''positive-definite'']] if the associated [[quadratic form]]
:<span id="quadratic forms">''f''{{spaces|hair}}('''x''') = '''x'''{{sup|T}}'''A{{nbsp}}x'''</span>
 
has a positive value for every nonzero vector '''x''' in '''R'''{{sup|''n''}}. If ''f''{{spaces|hair}}('''x''') only yields negative values then '''A''' is [[definiteness of a matrix#Negative definite|''negative-definite'']]; if ''f'' does produce both negative and positive values then '''A''' is [[definiteness of a matrix#Indefinite|''indefinite'']].<ref>{{Harvard citations |last1=Horn |last2=Johnson |year=1985 |nb=yes |loc=Chapter 7}}</ref> If the quadratic form ''f'' yields only non-negative values (positive or zero), the symmetric matrix is called ''positive-semidefinite'' (or if only non-positive values, then negative-semidefinite); hence the matrix is indefinite precisely when it is neither positive-semidefinite nor negative-semidefinite.
 
A symmetric matrix is positive-definite if and only if all its eigenvalues are positive, that is, the matrix is positive-semidefinite and it is invertible.<ref>{{Harvard citations |last1=Horn |last2=Johnson |year=1985 |nb=yes |loc=Theorem 7.2.1}}</ref> The table at the right shows two possibilities for 2-by-2 matrices.
 
Allowing as input two different vectors instead yields the [[bilinear form]] associated to '''A''':
:''B''{{sub|'''A'''}} ('''x''', '''y''') = '''x'''{{sup|T}}'''Ay'''.<ref>{{Harvard citations |last1=Horn |last2=Johnson |year=1985 |nb=yes |loc=Example 4.0.6, p. 169}}</ref>
 
====Orthogonal matrix====
{{Main|Orthogonal matrix}}
An ''orthogonal matrix'' is a [[#Square matrices|square matrix]] with [[real number|real]] entries whose columns and rows are [[orthogonal]] [[unit vector]]s (that is, [[orthonormality|orthonormal]] vectors). Equivalently, a matrix '''A''' is orthogonal if its [[transpose]] is equal to its [[invertible matrix|inverse]]:
:<math>\mathbf{A}^\mathrm{T}=\mathbf{A}^{-1}, \,</math>
which entails
:<math>\mathbf{A}^\mathrm{T} \mathbf{A} = \mathbf{A} \mathbf{A}^\mathrm{T} = \mathbf{I}_n,</math>
where '''I'''{{sub|''n''}} is the [[identity matrix]] of size ''n''.
 
An orthogonal matrix '''A''' is necessarily [[invertible matrix|invertible]] (with inverse {{nowrap|1='''A'''{{sup|&minus;1}} = '''A'''{{sup|T}}}}), [[unitary matrix|unitary]] ({{nowrap|1='''A'''{{sup|&minus;1}} = '''A'''*}}), and [[normal matrix|normal]] ({{nowrap|1='''A'''*'''A''' = '''AA'''*}}). The [[determinant]] of any orthogonal matrix is either {{math|+1}} or {{math|−1}}. A ''special orthogonal matrix'' is an orthogonal matrix with [[determinant]] +1. As a [[linear transformation]], every orthogonal matrix with determinant {{math|+1}} is a pure [[rotation (mathematics)|rotation]] without reflection, i.e., the transformation preserves the orientation of the transformed structure, while every orthogonal matrix with determinant {{math|-1}} reverses the orientation, i.e., is a composition of a pure [[reflection (mathematics)|reflection]] and a (possibly null) rotation. The identity matrices have determinant {{math|1}}, and are pure rotations by an angle zero.
 
The [[complex number|complex]] analogue of an orthogonal matrix is a [[unitary matrix]].
 
===Main operations===
 
====Trace====
The [[trace of a matrix|trace]], tr('''A''') of a square matrix '''A''' is the sum of its diagonal entries. While matrix multiplication is not commutative as mentioned [[#non commutative|above]], the trace of the product of two matrices is independent of the order of the factors:
: tr('''AB''') = tr('''BA''').
This is immediate from the definition of matrix multiplication:
:<math>\operatorname{tr}(\mathbf{AB}) = \sum_{i=1}^m \sum_{j=1}^n a_{ij} b_{ji} = \operatorname{tr}(\mathbf{BA}).</math>
It follows that the trace of the product of more than two matrices is independent of [[cyclic permutation]]s of the matrices; however, this does not in general hold for arbitrary permutations (for example, tr('''ABC''') ≠ tr('''BAC'''), in general). Also, the trace of a matrix is equal to that of its transpose, that is,
:{{nowrap begin}}tr('''A''') = tr('''A'''{{sup|T}}){{nowrap end}}.
 
====Determinant====
{{Main|Determinant}}
[[File:Determinant example.svg|thumb|300px|right|A linear transformation on '''R'''{{sup|2}} given by the indicated matrix. The determinant of this matrix is −1, as the area of the green parallelogram at the right is 1, but the map reverses the [[orientation (mathematics)|orientation]], since it turns the counterclockwise orientation of the vectors to a clockwise one.]]
 
The ''determinant'' of a square matrix '''A''' (denoted det('''A''') or |'''A'''|<ref name=":2" />) is a number encoding certain properties of the matrix. A matrix is invertible [[if and only if]] its determinant is nonzero. Its [[absolute value]] equals the area (in '''R'''{{sup|2}}) or volume (in '''R'''{{sup|3}}) of the image of the unit square (or cube), while its sign corresponds to the orientation of the corresponding linear map: the determinant is positive if and only if the orientation is preserved.
 
The determinant of 2-by-2 matrices is given by
:<math>\det \begin{bmatrix}a&b\\c&d\end{bmatrix} = ad-bc.</math><ref name=":3" />
The determinant of 3-by-3 matrices involves 6 terms ([[rule of Sarrus]]). The more lengthy [[Leibniz formula for determinants|Leibniz formula]] generalises these two formulae to all dimensions.<ref>{{Harvard citations |last1=Brown |year=1991 |nb=yes |loc=Definition III.2.1}}</ref>
 
The determinant of a product of square matrices equals the product of their determinants:
:{{nowrap begin}}det('''AB''') = det('''A''') · det('''B''').<ref>{{Harvard citations |last1=Brown |year=1991 |nb=yes |loc=Theorem III.2.12}}</ref>{{nowrap end}}
Adding a multiple of any row to another row, or a multiple of any column to another column, does not change the determinant. Interchanging two rows or two columns affects the determinant by multiplying it by −1.<ref>{{Harvard citations |last1=Brown |year=1991 |nb=yes |loc=Corollary III.2.16}}</ref> Using these operations, any matrix can be transformed to a lower (or upper) triangular matrix, and for such matrices the determinant equals the product of the entries on the main diagonal; this provides a method to calculate the determinant of any matrix. Finally, the [[Laplace expansion]] expresses the determinant in terms of [[minor (linear algebra)|minors]], that is, determinants of smaller matrices.<ref>{{Harvard citations |last1=Mirsky |year=1990 |nb=yes |loc=Theorem 1.4.1}}</ref> This expansion can be used for a recursive definition of determinants (taking as starting case the determinant of a 1-by-1 matrix, which is its unique entry, or even the determinant of a 0-by-0 matrix, which is 1), that can be seen to be equivalent to the Leibniz formula. Determinants can be used to solve [[linear system]]s using [[Cramer's rule]], where the division of the determinants of two related square matrices equates to the value of each of the system's variables.<ref>{{Harvard citations |last1=Brown |year=1991 |nb=yes |loc=Theorem III.3.18}}</ref>
 
====Eigenvalues and eigenvectors====
{{Main|Eigenvalue, eigenvector and eigenspace|l1=Eigenvalues and eigenvectors}}
A number λ and a non-zero vector '''v''' satisfying
:'''Av''' = λ'''v'''
are called an ''eigenvalue'' and an ''eigenvector'' of '''A''', respectively.<ref>''Eigen'' means "own" in [[German language|German]] and in [[Dutch language|Dutch]].</ref><ref>{{Harvard citations |last1=Brown |year=1991 |nb=yes |loc=Definition III.4.1}}</ref> The number λ is an eigenvalue of an ''n''×''n''-matrix '''A''' if and only if '''A'''−λ'''I'''{{sub|''n''}} is not invertible, which is [[logical equivalence|equivalent]] to
:<math>\det(\mathsf{A}-\lambda \mathsf{I}) = 0.</math><ref>{{Harvard citations |last1=Brown |year=1991 |nb=yes |loc=Definition III.4.9}}</ref>
The polynomial ''p''{{sub|'''A'''}} in an [[indeterminate (variable)|indeterminate]] ''X'' given by evaluation of the determinant det(''X'''''I'''{{sub|''n''}}−'''A''') is called the [[characteristic polynomial]] of '''A'''. It is a [[monic polynomial]] of [[degree of a polynomial|degree]] ''n''. Therefore the polynomial equation ''p''{{sub|'''A'''}}(λ){{nbsp}}={{nbsp}}0 has at most ''n'' different solutions, that is, eigenvalues of the matrix.<ref>{{Harvard citations |last1=Brown |year=1991 |nb=yes |loc=Corollary III.4.10}}</ref> They may be complex even if the entries of '''A''' are real. According to the [[Cayley–Hamilton theorem]], {{nowrap begin}}''p''{{sub|'''A'''}}('''A''') = '''0'''{{nowrap end}}, that is, the result of substituting the matrix itself into its own characteristic polynomial yields the [[zero matrix]].-->
 
== See also ==