Markov chain

In [[mathematics]], a '''Markov chain''' is a [[stochastic process]] with the [[Markov property]].
Given the present state, the future of the process is independent of its past.
 
In the discrete-time case, the process consists of a sequence <var>X<sub>1</sub>, X<sub>2</sub>, X<sub>3</sub>, ...</var> of [[random variable]]s.
The domain of these variables is called the ''state space''; <var>X<sub>n</sub></var> is the state of the process at time <var>n</var>.
If the conditional distribution of ''X''<sub>''n''+1</sub> given the past states is a function of ''X''<sub>''n''</sub> alone,
 
: <math> P(X_{n+1}|X_0, X_1, X_2, \ldots, X_n) = P(X_{n+1}|X_n) </math>
 
then the process is said to have the '''Markov property'''.
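
As a small illustrative sketch (the two-state chain and its probabilities are invented for this example, not taken from the text), a discrete-time Markov chain can be simulated by drawing each new state from a distribution that depends only on the current state:

```python
import random

# Hypothetical two-state chain (states 0 and 1); the numbers are made up.
# probs_up[s] = P(X_{n+1} = 1 | X_n = s): the next state depends only on s.
probs_up = {0: 0.1, 1: 0.5}

def step(state, rng=random):
    """Draw the next state using only the current state -- the Markov property."""
    return 1 if rng.random() < probs_up[state] else 0

def sample_path(x0, n, rng=random):
    """Sample the path X_0, X_1, ..., X_n starting from x0."""
    path = [x0]
    for _ in range(n):
        path.append(step(path[-1], rng))
    return path
```

Note that `step` receives nothing but the current state, so no earlier history can influence the draw.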
 
Markov chains are named after [[Andrei Andreevich Markov|A. A. Markov]], who produced the first results for these processes in 1906. A generalization to countably infinite state spaces was given by [[Andrey Nikolaevich Kolmogorov|Kolmogorov]] in 1936.
Markov chains are related to [[Brownian motion]] and the [[ergodic hypothesis]], two important topics in physics in the early [[twentieth century]],
but Markov appears to have pursued them out of a mathematical motivation as well, namely the extension of the [[law of large numbers]] to dependent events.
 
== Properties of Markov chains ==
 
A Markov chain is characterized by the conditional distribution
 
: <math> P(X_{n+1}| X_n)\, </math>
 
which is called the ''transition probability'' of the process.
This is sometimes called the "one-step" transition probability.
The two-step, three-step, and higher-order transition probabilities follow from the one-step ''transition probability'' and the Markov property:
 
: <math> P(X_{n+2}|X_n) = \int P(X_{n+2},X_{n+1}|X_n)\,dX_{n+1}
= \int P(X_{n+2}|X_{n+1}) \, P(X_{n+1}|X_n) \, dX_{n+1}</math>
 
Likewise,
 
: <math> P(X_{n+3}|X_n) = \int P(X_{n+3}|X_{n+2}) \int P(X_{n+2}|X_{n+1}) \, P(X_{n+1}|X_n) \, dX_{n+1} \, dX_{n+2}</math>
 
These formulas generalize to arbitrary future times ''n''+''k'' by multiplying the transition probabilities and integrating over the ''k''−1 intermediate states.
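
For a discrete state space (treated in more detail below), the integral over the intermediate state becomes a sum. A hedged sketch with an invented 3-state chain, using the column convention adopted later in this article:

```python
import numpy as np

# Hypothetical 3-state chain. Column j holds P(X_{n+1} = i | X_n = j),
# so each column sums to 1.
P = np.array([[0.5, 0.2, 0.1],
              [0.3, 0.6, 0.3],
              [0.2, 0.2, 0.6]])

# Two-step transition probability, summing over the intermediate state m:
#   P(X_{n+2} = i | X_n = j) = sum_m P(X_{n+2} = i | X_{n+1} = m) * P(X_{n+1} = m | X_n = j)
P2 = np.zeros_like(P)
for i in range(3):
    for j in range(3):
        P2[i, j] = sum(P[i, m] * P[m, j] for m in range(3))
```

The explicit sum is exactly matrix multiplication, so `P2` coincides with `P @ P`.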
 
The [[marginal distribution]] ''P''(''X''<sub>''n''</sub>) is the distribution of states at time ''n''.
The initial distribution is ''P''(''X''<sub>0</sub>).
The evolution of the process through one time step is described by
 
: <math> P(X_{n+1}) = \int P(X_{n+1}|X_n)\,P(X_n)\,dX_n </math>
 
This is a version of the [[Frobenius-Perron equation]].
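
In the discrete case this evolution is a matrix-vector product. A minimal sketch, again with an invented column-stochastic matrix:

```python
import numpy as np

# Invented 3-state example; column j of P holds P(X_{n+1} = i | X_n = j).
P = np.array([[0.5, 0.2, 0.1],
              [0.3, 0.6, 0.3],
              [0.2, 0.2, 0.6]])

p = np.array([1.0, 0.0, 0.0])  # initial distribution P(X_0): start in state 0

# One step of evolution:
#   P(X_{n+1} = i) = sum_j P(X_{n+1} = i | X_n = j) * P(X_n = j)
p = P @ p
```

Starting from a point mass on state 0, one step simply reads off the first column of the matrix.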
There may exist one or more ''stationary'' distributions π such that
 
:<math> \pi(X) = \int P(X|Y)\,\pi(Y)\,dY</math>
 
where ''Y'' is just a dummy variable of integration.
Such a distribution π is called a ''stationary distribution'' or ''steady-state distribution''.
A stationary distribution is an [[eigenfunction]] of the conditional distribution function, associated with the [[eigenvalue]] 1.
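
In the discrete case the eigenfunction is an eigenvector, so a stationary distribution can be read off a standard eigendecomposition. A sketch under the same invented example:

```python
import numpy as np

# Invented column-stochastic transition matrix (columns sum to 1).
P = np.array([[0.5, 0.2, 0.1],
              [0.3, 0.6, 0.3],
              [0.2, 0.2, 0.6]])

# The stationary distribution is the eigenvector of P for eigenvalue 1,
# rescaled so that its entries sum to 1.
vals, vecs = np.linalg.eig(P)
k = np.argmin(np.abs(vals - 1.0))   # index of the eigenvalue closest to 1
pi = np.real(vecs[:, k])
pi = pi / pi.sum()                  # normalize to a probability vector
```

Dividing by the sum also fixes the arbitrary sign that `eig` may attach to the eigenvector.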
 
Whether or not there is a stationary distribution,
and whether or not it is unique if it does exist,
are determined by certain properties of the process.
''Irreducible'' means that every state is accessible from every other state.
''Aperiodic'' means that the chain does not cycle deterministically: the greatest common divisor of the possible return times to a state is 1 (in an irreducible chain, a single state that can transition to itself suffices). ''Positive recurrent'' means that the expected return time is finite for every state.
Sometimes the terms ''indecomposable'', ''acyclic'', and ''persistent'' are used as synonyms for "irreducible", "aperiodic", and "recurrent", respectively.
 
If the Markov chain is positive recurrent,
there exists a stationary distribution.
If it is positive recurrent and irreducible,
there exists a unique stationary distribution,
and furthermore the process constructed by taking the stationary distribution as the initial distribution is [[ergodic theory|ergodic]].
Then the average of a function ''f'' over samples of the Markov chain is equal to the average with respect to the stationary distribution,
 
:<math> \lim_{n\rightarrow\infty}\; \frac{1}{n} \; \sum_{k=0}^{n-1} f(X_k)
= \int f(X)\,\pi(X)\,dX </math>
 
In particular,
this holds for ''f'' equal to the identity function.
Thus the average of the samples over time equals the [[expected value]] of the stationary distribution.
 
Furthermore,
the equivalence of averages also holds if ''f'' is the [[indicator function]] on some subset ''A'' of the state space.
 
:<math> \lim_{n\rightarrow\infty}\; \frac{1}{n} \; \sum_{k=0}^{n-1} \chi_A(X_k)
= \int_A \pi(X)\,dX = \mu_{\pi}(A) </math>
 
where μ<sub>π</sub> is the measure induced by π.
This makes it possible to approximate the stationary distribution by a [[histogram]] or other density estimate of a sequence of samples.
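
This can be checked numerically. The sketch below (invented 3-state chain, every entry positive, so the chain is ergodic) simulates a long sample path and compares the empirical visit frequencies, i.e. the time averages of the indicator functions, with the stationary distribution:

```python
import numpy as np

rng = np.random.default_rng(0)

# Invented column-stochastic transition matrix.
P = np.array([[0.5, 0.2, 0.1],
              [0.3, 0.6, 0.3],
              [0.2, 0.2, 0.6]])

# Stationary distribution, here taken from a high matrix power
# (every column of P^k converges to it for this chain).
pi = np.linalg.matrix_power(P, 200)[:, 0]

# Simulate the chain and record how often each state is visited.
n_steps = 50_000
x = 0
counts = np.zeros(3)
for _ in range(n_steps):
    counts[x] += 1
    x = rng.choice(3, p=P[:, x])   # draw X_{n+1} from column X_n
freq = counts / n_steps            # empirical "histogram" of the samples
```

For this chain `freq` agrees with `pi` up to Monte Carlo error of order 1/√n.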
 
== Markov chains on discrete state spaces ==
 
If the state space is [[finite]],
the transition probability distribution can be represented as a matrix,
called the ''transition matrix'',
with the (''i'', ''j'')'th element equal to
 
:<math>P(X_{n+1}=i\mid X_n=j)</math>
 
(In this formulation, element (''i'', ''j'') is the probability of a transition from ''j'' to ''i''.
An equivalent formulation is sometimes given with element (''i'', ''j'') equal to the probability of a transition from ''i'' to ''j''.
In that case the transition matrix is just the [[transpose]] of the one given here.)
 
For a discrete state space,
the integrations in the ''k''-step transition probability are summations,
and can be computed as the ''k''<nowiki>'</nowiki>th power of the transition matrix.
That is, if ''P'' is the one-step transition matrix, then
''P''<sup>''k''</sup> is the transition matrix for the ''k''-step transition.
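As a brief sketch (the matrix entries are invented for illustration), the ''k''-step matrix can be computed directly:

```python
import numpy as np

# Invented one-step transition matrix (column-stochastic).
P = np.array([[0.5, 0.2, 0.1],
              [0.3, 0.6, 0.3],
              [0.2, 0.2, 0.6]])

# k-step transition matrix as the k'th power of P; here k = 3, so
# P3[i, j] = P(X_{n+3} = i | X_n = j).
P3 = np.linalg.matrix_power(P, 3)
```

`matrix_power` is just repeated matrix multiplication, so `P3` equals `P @ P @ P`, and its columns remain probability distributions.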
 
Writing ''P'' for the transition matrix,
a stationary distribution is a vector which satisfies the equation
 
:<math> P\pi = \pi\,</math>
 
In this case, the stationary distribution is an [[eigenvector]] of the transition matrix,
associated with the [[eigenvalue]] 1.
If the chain described by the transition matrix ''P'' is positive recurrent, irreducible, and aperiodic,
then ''P''<sup>''k''</sup> converges elementwise to a matrix in which each column is the unique stationary distribution.
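
This convergence is easy to observe numerically. A sketch with an invented strictly positive matrix (which, as noted below, guarantees the required properties):

```python
import numpy as np

# Invented positive column-stochastic transition matrix; positivity makes
# the chain irreducible, aperiodic, and positive recurrent.
P = np.array([[0.5, 0.2, 0.1],
              [0.3, 0.6, 0.3],
              [0.2, 0.2, 0.6]])

# P^k converges elementwise; for large k every column is (numerically)
# the unique stationary distribution.
Pk = np.linalg.matrix_power(P, 100)
```

For this matrix the subdominant eigenvalues have modulus well below 1, so 100 steps already agree to machine precision.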
 
A transition matrix which is positive (that is, every element of the matrix is positive)
is irreducible, aperiodic, and positive recurrent.
A matrix is a [[stochastic matrix]] if and only if it is the matrix of transition probabilities of some Markov chain.
 
== Scientific applications ==
 
Markov chains are used to model various processes in [[queueing theory]] and [[statistics]], and can also be used as a signal model in [[entropy coding]] techniques such as [[arithmetic coding]]. Markov chains also have many biological applications, particularly [[population process]]es, which are useful in modelling processes that are (at least) analogous to biological populations. Markov chains have been used in [[bioinformatics]] as well. An example is the [[genemark algorithm]] for coding region/gene prediction.
 
Markov processes can also be used to generate superficially "real-looking" text given a sample document: they are used in various pieces of recreational "parody generator" software (see [[Jeff Harrison]]).
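
A minimal sketch of such a text generator (a word-level chain built from a toy sample string; the helper names are invented for this example):

```python
import random
from collections import defaultdict

def build_chain(text):
    """Record, for each word, the words that follow it in the sample text."""
    words = text.split()
    chain = defaultdict(list)
    for a, b in zip(words, words[1:]):
        chain[a].append(b)
    return chain

def generate(chain, start, length, rng=random):
    """Walk the chain: each next word depends only on the current word."""
    out = [start]
    while len(out) < length:
        followers = chain.get(out[-1])
        if not followers:
            break
        out.append(rng.choice(followers))
    return " ".join(out)

chain = build_chain("the cat sat on the mat and the cat ran to the mat")
text = generate(chain, "the", 8)
```

Because repeated followers appear multiple times in each list, `rng.choice` draws them with their empirical frequency, which is what makes the output superficially resemble the sample.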
 
== See also ==
 
* [[Hidden Markov model]]
* [[Examples of Markov chains]]
* [[Mark V Shaney]]
 
== References ==
 
* A.A. Markov. "Rasprostranenie zakona bol'shih chisel na velichiny, zavisyaschie drug ot druga". ''Izvestiya Fiziko-matematicheskogo obschestva pri Kazanskom universitete'', 2-ya seriya, tom 15, pp. 135–156, 1906.
 
* A.A. Markov. "Extension of the limit theorems of probability theory to a sum of variables connected in a chain". reprinted in Appendix B of: R. Howard. ''Dynamic Probabilistic Systems, volume 1: Markov Chains''. John Wiley and Sons, 1971.
 
* Leo Breiman. ''Probability''. Original edition published by Addison-Wesley, 1968; reprinted by Society for Industrial and Applied Mathematics, 1992. ISBN 0-89871-296-3. ''(See Chapter 7.)''
 
* J.L. Doob. ''Stochastic Processes''. New York: John Wiley and Sons, 1953. ISBN 0-471-52369-0.
 
== External links ==
 
* [http://crypto.mat.sbg.ac.at/~ste/diss/node6.html Markov Chains]
 
* [http://www.cs.bell-labs.com/cm/cs/pearls/sec153.html Generating Text] ''(About generating random text using a Markov chain.)''
 
* [http://www.mathworks.com/company/newsletters/news_notes/clevescorner/oct02_cleve.html The World's Largest Matrix Computation] ''(Google's PageRank as the stationary distribution of a random walk through the Web.)''
 
* [http://www.gnu.org/software/emacs/manual/html_node/emacs_473.html Disassociated Press] in [[Emacs]] approximates a Markov process
 
[[Category:Probability theory]]
[[Category:Stochastic processes]]
 
[[af:Markovketting]]
[[ar:سلسلة ماركوف]]
[[bg:Марковска верига]]
[[ca:Cadena de Màrkov]]
[[cs:Markovův řetězec]]
[[da:Markov-kæde]]
[[de:Markow-Kette]]
[[el:Αλυσίδα Μαρκόφ]]
[[en:Markov chain]]
[[es:Cadena de Markov]]
[[et:Markovi ahel]]
[[eu:Markov kate]]
[[fa:زنجیره مارکوف]]
[[fi:Markovin ketju]]
[[fr:Chaîne de Markov]]
[[gl:Cadea de Markov]]
[[he:שרשרת מרקוב]]
[[hr:Markovljev lanac]]
[[hu:Markov-lánc]]
[[is:Markov-keðja]]
[[it:Processo markoviano]]
[[ja:マルコフ連鎖]]
[[ko:마르코프 연쇄]]
[[lt:Markovo grandinė]]
[[nl:Markov-keten]]
[[pl:Łańcuch Markowa]]
[[pt:Cadeias de Markov]]
[[ro:Lanț Markov]]
[[ru:Цепь Маркова]]
[[sh:Markovljev lanac]]
[[simple:Markov chain]]
[[sr:Ланци Маркова]]
[[sv:Markovkedja]]
[[tr:Markov zinciri]]
[[uk:Ланцюги Маркова]]
[[ur:مارکوو زنجیر]]
[[vi:Xích Markov]]
[[zh:马尔可夫链]]