
A naive Bayes classifier is a simple probabilistic classification method. A more descriptive term for the underlying probability model is "independent feature model". The term "naive Bayes" reflects the fact that the probability model can be derived using Bayes' theorem (named after Thomas Bayes) and relies on strong independence assumptions that rarely hold in the real world; the model is therefore (deliberately) naive. Depending on the precise nature of the probability model, naive Bayes classifiers can be trained very efficiently in a supervised learning setting. In practical applications, parameter estimation for naive Bayes models uses the method of maximum likelihood; in other words, one can work with a naive Bayes model without subscribing to Bayesian probability or using any Bayesian methods.

The naive Bayes probability model


Abstractly, the probability model for a classifier is a conditional model

p(C \mid F_1, \dots, F_n)

over a dependent class variable C with a small number of outcomes or classes, conditional on several feature variables F_1 through F_n. The problem is that if the number of features n is large, or if a feature can take on a large number of values, then basing such a model on probability tables is infeasible. We therefore reformulate the model to make it more tractable.

Using Bayes' theorem, we write

p(C \mid F_1, \dots, F_n) = \frac{p(C)\, p(F_1, \dots, F_n \mid C)}{p(F_1, \dots, F_n)}

In practice we are interested only in the numerator of that fraction: the denominator does not depend on C, and the values of the features F_i are given, so the denominator is effectively constant. The numerator is equivalent to the joint probability model

p(C, F_1, \dots, F_n)

which can be rewritten as follows, using repeated applications of the definition of conditional probability:

p(C, F_1, \dots, F_n)
= p(C)\, p(F_1, \dots, F_n \mid C)
= p(C)\, p(F_1 \mid C)\, p(F_2, \dots, F_n \mid C, F_1)
= p(C)\, p(F_1 \mid C)\, p(F_2 \mid C, F_1)\, p(F_3, \dots, F_n \mid C, F_1, F_2)
= p(C)\, p(F_1 \mid C)\, p(F_2 \mid C, F_1)\, p(F_3 \mid C, F_1, F_2)\, p(F_4, \dots, F_n \mid C, F_1, F_2, F_3)

and so forth. Now the "naive" conditional independence assumptions come into play: assume that each feature F_i is conditionally independent of every other feature F_j for j ≠ i. This means that

p(F_i \mid C, F_j) = p(F_i \mid C)

and so the joint model can be expressed as

p(C, F_1, \dots, F_n) = p(C)\, p(F_1 \mid C)\, p(F_2 \mid C) \cdots p(F_n \mid C)
= p(C) \prod_{i=1}^{n} p(F_i \mid C)

This means that under the above independence assumptions, the conditional distribution over the class variable C can be expressed as:

p(C \mid F_1, \dots, F_n) = \frac{1}{Z}\, p(C) \prod_{i=1}^{n} p(F_i \mid C)

where Z is a scaling factor dependent only on F_1, \dots, F_n, i.e., a constant if the values of the feature variables are known.

Models of this form are much more manageable, since they factor into a so-called class prior p(C) and independent probability distributions p(F_i \mid C). If there are k classes and if a model for each p(F_i) can be expressed in terms of r parameters, then the corresponding naive Bayes model has (k − 1) + n r k parameters. In practice, k = 2 (binary classification) and r = 1 (Bernoulli variables as features) are common, and so the total number of parameters of the naive Bayes model is 2n + 1, where n is the number of binary features used for prediction.
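As a quick numeric check of the parameter count above, consider this small sketch (the function name is ours, not standard terminology):

```python
def naive_bayes_param_count(k, n, r):
    """Free parameters in a naive Bayes model with k classes,
    n features, and r parameters per class-conditional feature
    distribution: (k - 1) class-prior parameters plus n*r*k
    feature parameters."""
    return (k - 1) + n * r * k

# Binary classification (k = 2) with n Bernoulli features (r = 1)
# gives 2n + 1 parameters, matching the count stated above.
for n in (1, 5, 10):
    assert naive_bayes_param_count(2, n, 1) == 2 * n + 1
```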

Parameter estimation


In a supervised learning setting, one wants to estimate the parameters of the distribution model. Because of the independent feature assumption, it suffices to estimate the class prior and the conditional feature models independently, using the method of maximum likelihood, Bayesian inference, or other parameter estimation procedures.
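As an illustration, a minimal maximum-likelihood fit for binary (Bernoulli) features can be sketched as follows; the data layout and function name are our own assumptions, not a standard API:

```python
from collections import Counter, defaultdict

def fit_naive_bayes(samples):
    """Maximum-likelihood estimates for a naive Bayes model with
    binary features. `samples` is a list of (feature_tuple, label).
    Returns class priors p(C) and per-class Bernoulli parameters
    p(F_i = 1 | C), both as plain relative frequencies."""
    class_counts = Counter(label for _, label in samples)
    priors = {c: cnt / len(samples) for c, cnt in class_counts.items()}

    feature_on = defaultdict(Counter)  # feature_on[c][i] = #{F_i = 1 in class c}
    for features, label in samples:
        for i, f in enumerate(features):
            if f:
                feature_on[label][i] += 1

    n = len(samples[0][0])
    likelihoods = {
        c: [feature_on[c][i] / class_counts[c] for i in range(n)]
        for c in class_counts
    }
    return priors, likelihoods
```

Note that a pure maximum-likelihood estimate can assign probability zero to an unseen feature value; in practice some form of smoothing is usually added.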

Constructing a classifier from the probability model


The discussion so far has derived the independent feature model, that is, the naive Bayes probability model. The naive Bayes classifier combines this model with a decision rule. One common rule is to pick the hypothesis that is most probable; this is known as the maximum a posteriori or MAP decision rule. The corresponding classifier is the function classify defined as follows:

\mathrm{classify}(f_1, \dots, f_n) = \operatorname{argmax}_c\, p(C = c) \prod_{i=1}^{n} p(F_i = f_i \mid C = c)
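A minimal sketch of the MAP decision rule for binary features might look like this (sums of logs replace the product for numerical stability, and the dictionaries of priors and likelihoods are hypothetical inputs):

```python
import math

def classify_map(features, priors, likelihoods):
    """MAP decision rule for naive Bayes with binary features:
    pick the class c maximizing p(C=c) * prod_i p(F_i = f_i | C=c).
    Log-probabilities are summed instead of multiplying raw
    probabilities; a tiny epsilon guards against log(0)."""
    eps = 1e-12

    def log_posterior(c):
        score = math.log(priors[c])
        for f, p1 in zip(features, likelihoods[c]):
            p = p1 if f else 1.0 - p1
            score += math.log(max(p, eps))
        return score

    return max(priors, key=log_posterior)
```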

Discussion


The naive Bayes classifier has several properties that make it surprisingly useful in practice, despite the fact that the far-reaching independence assumptions are often violated. Like all probabilistic classifiers under the MAP decision rule, it arrives at the correct classification as long as the correct class is more probable than any other class; class probabilities do not have to be estimated very well. In other words, the overall classifier is robust to serious deficiencies of its underlying naive probability model. Other reasons for the observed success of the naive Bayes classifier are discussed in the literature cited below.

In real life, the naive Bayes approach is more powerful than might be expected from the extreme simplicity of its model; in particular, it is fairly robust in the presence of non-independent attributes w_i. Recent theoretical analysis has shown why the naive Bayes classifier is so robust.

Example: document classification


Here is a worked example of naive Bayesian classification applied to the document classification problem. Consider the problem of classifying documents by their content, for example into spam and non-spam e-mails. Imagine that documents are drawn from a number of classes of documents which can be modelled as sets of words, where the (independent) probability that the i-th word of a given document occurs in a document from class C can be written as

p(w_i \mid C)

(For this treatment, we simplify things further by assuming that the probability of a word in a document is independent of the length of the document, or that all documents are of the same length.)

Then the probability of a given document D, given a class C, is

p(D \mid C) = \prod_i p(w_i \mid C)
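Under this word-independence assumption, p(D | C) is simply a product over the words of D. A small sketch (the floor probability for unseen words is our own assumption, not part of the model):

```python
import math

def doc_likelihood(words, word_probs):
    """p(D | C) under the naive word-independence assumption:
    the product of per-word class-conditional probabilities.
    `word_probs[w]` holds p(w | C); words absent from the table
    fall back to a small floor probability (an assumption made
    here to avoid multiplying by zero)."""
    floor = 1e-6
    return math.prod(word_probs.get(w, floor) for w in words)
```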

The question that we desire to answer is: "what is the probability that a given document D belongs to a given class C?"

Now by definition (see Probability axiom)

p(D \mid C) = \frac{p(D \cap C)}{p(C)}

and

p(C \mid D) = \frac{p(D \cap C)}{p(D)}

Bayes' theorem manipulates these into a statement of probability in terms of likelihood:

p(C \mid D) = \frac{p(C)\, p(D \mid C)}{p(D)}

Assume for the moment that there are only two classes, S and ¬S.

p(D \mid S) = \prod_i p(w_i \mid S)

and

p(D \mid \neg S) = \prod_i p(w_i \mid \neg S)

Using the Bayesian result above, we can write:

p(S \mid D) = \frac{p(S)}{p(D)} \prod_i p(w_i \mid S)
p(\neg S \mid D) = \frac{p(\neg S)}{p(D)} \prod_i p(w_i \mid \neg S)

Dividing one by the other gives:

\frac{p(S \mid D)}{p(\neg S \mid D)} = \frac{p(S) \prod_i p(w_i \mid S)}{p(\neg S) \prod_i p(w_i \mid \neg S)}

This can be re-factored as:

\frac{p(S \mid D)}{p(\neg S \mid D)} = \frac{p(S)}{p(\neg S)} \prod_i \frac{p(w_i \mid S)}{p(w_i \mid \neg S)}

Thus, the probability ratio p(S | D) / p(¬S | D) can be expressed in terms of a series of likelihood ratios. The actual probability p(S | D) can be easily computed from log(p(S | D) / p(¬S | D)) based on the observation that p(S | D) + p(¬S | D) = 1.

Taking the logarithm of all these ratios, we have:

\ln \frac{p(S \mid D)}{p(\neg S \mid D)} = \ln \frac{p(S)}{p(\neg S)} + \sum_i \ln \frac{p(w_i \mid S)}{p(w_i \mid \neg S)}

This technique of "log-likelihood ratios" is a common technique in statistics. In the case of two mutually exclusive alternatives (such as this example), the conversion of a log-likelihood ratio to a probability takes the form of a sigmoid curve: see logit for details.
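The whole pipeline of this example, from summed log-likelihood ratios to p(S | D) via the logistic sigmoid, can be sketched as follows (the per-word ratios in the usage example are hypothetical, not estimates from real data):

```python
import math

def spam_probability(words, log_prior_ratio, log_word_ratios):
    """Convert a naive Bayes log-likelihood-ratio score into p(S | D).
    `log_word_ratios[w]` holds ln(p(w|S) / p(w|¬S)); words not in the
    table contribute 0. Because p(S|D) + p(¬S|D) = 1, the summed
    log-ratio maps to a probability through the logistic sigmoid."""
    log_ratio = log_prior_ratio + sum(log_word_ratios.get(w, 0.0) for w in words)
    return 1.0 / (1.0 + math.exp(-log_ratio))

# Usage with hypothetical per-word log-ratios and equal priors:
ratios = {"viagra": 2.0, "meeting": -1.5}
p = spam_probability(["viagra", "viagra"], 0.0, ratios)
```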

See also


Other sources

  • Pedro Domingos and Michael Pazzani. "On the optimality of the simple Bayesian classifier under zero-one loss". Machine Learning, 29:103–130, 1997.
  • Irina Rish. "An empirical study of the naive Bayes classifier". IJCAI 2001 Workshop on Empirical Methods in Artificial Intelligence.

External links
