FACTFORMER: A TRANSFORMER BASED FACT CHECKER

dc.contributor.AUBidnumber: 202229878
dc.contributor.advisor: Elbassuoni, Shady
dc.contributor.author: Adesokan, Ademola
dc.contributor.commembers: El Hajj, Izzat
dc.contributor.commembers: Safa, Haidar
dc.contributor.degree: MS
dc.contributor.department: Department of Computer Science
dc.contributor.faculty: Faculty of Arts and Sciences
dc.date: 2023
dc.date.accessioned: 2023-09-07T11:06:19Z
dc.date.available: 2023-09-07T11:06:19Z
dc.date.issued: 2023-09-07
dc.date.submitted: 2023-09-05
dc.description.abstract: Misinformation can undermine public trust and lead to misguided actions based on unreliable sources, straining fact-checking efforts. Traditional manual fact-checking suffers from several challenges, including scalability, performance, and complexity. In response, we introduce FactFormer, an automatic fact-checking system that retrieves evidence from trustworthy sources for a given claim and then classifies the claim based on the retrieved evidence. Our retrieval model adopts an extractive question-answering technique: claims are treated as questions and trusted sources as the context from which evidence, construed as answers, is retrieved. We harnessed the Bidirectional Encoder Representations from Transformers (BERT) and Distilled BERT (DistilBERT) architectures, fine-tuning them specifically for evidence extraction. Claim verification was then accomplished using a multi-headed BERT combined with a fully connected network layer. During evaluation, our retrieval models demonstrated state-of-the-art results: the BERT model yielded an exact-match rate of 89.89% and an F1-measure of 93.93%, while the DistilBERT model achieved an exact-match rate of 90.19% and an F1-measure of 93.98% when evaluated with a maximum evidence length of 100 words. Our claim verification model achieved a high accuracy of 90% on the existing manually annotated Fact Extraction and VERification (FEVER) dataset with three classes, outperforming other state-of-the-art systems. We further conducted end-to-end system experiments using our retrieved evidence to demonstrate that it generalizes well compared to the manually annotated FEVER-2 dataset with two labels: with DistilBERT-retrieved evidence, our claim verification model achieved 87.14% accuracy on FEVER-2, outperforming the 86.54% accuracy obtained with the manually annotated FEVER-2 evidence. In conclusion, our approach significantly enhances fact-checking by improving both evidence retrieval and claim classification.
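The exact-match and F1 figures quoted in the abstract follow the standard extractive question-answering evaluation. As a minimal illustrative sketch (not the thesis code), SQuAD-style exact match and token-level F1 between a retrieved evidence span and a gold span can be computed as follows; the normalization choices (lowercasing, stripping punctuation and English articles) are an assumption based on common practice:

```python
import re
import string
from collections import Counter

def normalize(text: str) -> str:
    """Lowercase, strip punctuation and articles, collapse whitespace."""
    text = text.lower()
    text = "".join(ch for ch in text if ch not in string.punctuation)
    text = re.sub(r"\b(a|an|the)\b", " ", text)
    return " ".join(text.split())

def exact_match(prediction: str, gold: str) -> bool:
    """True when the normalized spans are identical."""
    return normalize(prediction) == normalize(gold)

def f1_score(prediction: str, gold: str) -> float:
    """Token-level F1 between predicted and gold evidence spans."""
    pred_tokens = normalize(prediction).split()
    gold_tokens = normalize(gold).split()
    common = Counter(pred_tokens) & Counter(gold_tokens)
    overlap = sum(common.values())
    if overlap == 0:
        return 0.0
    precision = overlap / len(pred_tokens)
    recall = overlap / len(gold_tokens)
    return 2 * precision * recall / (precision + recall)
```

Corpus-level scores are then averages of these per-claim values, with F1 rewarding partial overlap between the retrieved span and the annotated evidence even when exact match fails.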
dc.identifier.uri: http://hdl.handle.net/10938/24149
dc.language.iso: en
dc.subject: Fact-checking, Transformers, Evidence Retrieval, Claim Verification, FEVER
dc.title: FACTFORMER: A TRANSFORMER BASED FACT CHECKER
dc.type: Thesis

Files

Original bundle

Name: AdesokanAdemola_2023.pdf
Size: 1.38 MB
Format: Adobe Portable Document Format
Description: Main thesis, 2023

License bundle

Name: license.txt
Size: 1.65 KB
Description: Item-specific license agreed upon to submission