FACTFORMER: A TRANSFORMER BASED FACT CHECKER
| dc.contributor.AUBidnumber | 202229878 | |
| dc.contributor.advisor | Elbassuoni, Shady | |
| dc.contributor.author | Adesokan, Ademola | |
| dc.contributor.commembers | El Hajj, Izzat | |
| dc.contributor.commembers | Safa, Haidar | |
| dc.contributor.degree | MS | |
| dc.contributor.department | Department of Computer Science | |
| dc.contributor.faculty | Faculty of Arts and Sciences | |
| dc.date | 2023 | |
| dc.date.accessioned | 2023-09-07T11:06:19Z | |
| dc.date.available | 2023-09-07T11:06:19Z | |
| dc.date.issued | 2023-09-07 | |
| dc.date.submitted | 2023-09-05 | |
| dc.description.abstract | Misinformation can undermine public trust and lead to misguided actions driven by unreliable sources, and it strains fact-checking efforts. Traditional manual fact-checking suffers from several challenges, including scaling, performance, and complexity. In response to these challenges, we introduce FactFormer, an automatic fact-checking system that retrieves evidence from trustworthy sources for a given claim and then classifies the claim into one of several labels based on the retrieved evidence. Our retrieval model adopts an extractive question-answering technique: claims are treated as questions and trusted sources as the context from which evidence, construed as answers, is extracted. We harnessed the Bidirectional Encoder Representations from Transformers (BERT) and Distilled Bidirectional Encoder Representations from Transformers (DistilBERT) architectures, fine-tuning them specifically for evidence extraction. Claim verification was then performed with a multi-headed BERT combined with a fully connected network layer. During evaluation, our retrieval models demonstrated state-of-the-art results: the BERT model yielded an exact-match rate of 89.89% and an F1 score of 93.93%, while the DistilBERT model achieved an exact-match rate of 90.19% and an F1 score of 93.98% when evaluated with a maximum evidence length of 100 words. Our claim verification model achieved a high accuracy of 90% on the manually annotated three-class Fact Extraction and VERification (FEVER) dataset, outperforming other state-of-the-art systems. We further conducted end-to-end experiments and evaluations using our retrieved evidence to demonstrate that the system generalizes well when compared to the manually annotated two-label FEVER-2 dataset.
Our claim verification model on FEVER-2 with DistilBERT achieved 87.14% accuracy, outperforming the 86.54% accuracy obtained with the manually annotated FEVER-2 evidence. In conclusion, our approach significantly enhances fact-checking by improving both evidence retrieval and claim classification. | |
| dc.identifier.uri | http://hdl.handle.net/10938/24149 | |
| dc.language.iso | en | |
| dc.subject | Fact-checking, Transformers, Evidence Retrieval, Claim Verification, FEVER | |
| dc.title | FACTFORMER: A TRANSFORMER BASED FACT CHECKER | |
| dc.type | Thesis |
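The exact-match and F1 figures reported in the abstract are the standard extractive question-answering metrics, computed per example between a predicted evidence span and the gold span. The sketch below is a minimal illustration of how such metrics are typically computed (SQuAD-style normalization; hypothetical helper names, not the thesis's actual code):

```python
import re
from collections import Counter

def normalize(text):
    # Lowercase, replace punctuation with spaces, collapse whitespace
    # (SQuAD-style answer normalization; an assumption, not the thesis's exact scheme).
    text = re.sub(r"[^\w\s]", " ", text.lower())
    return " ".join(text.split())

def exact_match(prediction, gold):
    # 1 if the normalized spans are identical, else 0.
    return int(normalize(prediction) == normalize(gold))

def span_f1(prediction, gold):
    # Token-level F1: harmonic mean of precision and recall over shared tokens.
    pred_tokens = normalize(prediction).split()
    gold_tokens = normalize(gold).split()
    common = Counter(pred_tokens) & Counter(gold_tokens)
    overlap = sum(common.values())
    if overlap == 0:
        return 0.0
    precision = overlap / len(pred_tokens)
    recall = overlap / len(gold_tokens)
    return 2 * precision * recall / (precision + recall)
```

Corpus-level scores such as the 93.98% F1 quoted above would then be averages of these per-example values over the evaluation set.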
Files
Original bundle
- Name: AdesokanAdemola_2023.pdf
- Size: 1.38 MB
- Format: Adobe Portable Document Format
- Description: Main thesis, 2023
License bundle
- Name: license.txt
- Size: 1.65 KB
- Description: Item-specific license agreed upon to submission