Hugging face fine tune question answering

Visual Question Answering. Language: Vietnamese. May 31, 2023 · Let's look at some key aspects only. My objective is to fine-tune a model on data about Manchester United's (MU's) 2021/22 season. (I changed DistilBERT to RoBERTa to match the pretrained model.) source: The source string where the question contents were found. Jun 15, 2023 · Hi all, I have a working script for fine-tuning a T5 model on a question answering task, and I want to adapt the script for fine-tuning a GPT-2 model on the same task and dataset. The GPT-2 Model transformer with a span classification head on top for extractive question-answering tasks like SQuAD (a linear layer on top of the hidden-states output to compute span start logits and span end logits). Recent works investigate the adaptation of MLLMs to predict free-form answers as a generative task to solve medical visual question answering (Med-VQA) tasks. If you decide to use pre-trained Pythia-2.8B as a basis for your fine-tuned model, please conduct your own risk and bias assessment. In the tutorial, we fine-tune a German GPT-2 from the Hugging Face model hub. I would like to fine-tune a custom QA bot that will work on Italian texts (I was thinking about using the model 'dbmdz/bert-base-italian-cased') in a very specific field (medical reports). Contribute to ftarlaci/GPT2sQA development by creating an account on GitHub. We will only use the columns question, multiple_choice_answer and image, so let's remove the rest of the columns as well. Fine-tuning DistilBERT with the Trainer API. Training data: train-v2.0. from datasets import load_dataset; datasets = load_dataset("squad"). The datasets object itself is a DatasetDict, which contains one key per split (for SQuAD, train and validation). How to use: to get the features of a given text in PyTorch, see Question Answering on SQuAD (Colab). Jun 7, 2023 · Hi all, I saw that the question-answering results of BioBERT on my dataset aren't good enough, so I want to fine-tune it. Furthermore, we will delve into the integration of Weights & Biases with our training pipeline to enable efficient experiment tracking, model comparison, and hyperparameter optimization. The input to models supporting this task is typically a combination of an image and a question, and the output is an answer expressed in natural language. Mar 28, 2024 · Finally, we are ready to fine-tune our Llama-2 model for question-answering tasks. A multiple choice task is similar to question answering, except several candidate answers are provided along with a context and the model is trained to select the correct answer. Out-of-scope use. Dec 27, 2022 · Fine-tuning GPT-2 Small for Question Answering. We will also split the dataset. mT5 question-answering fine-tuning is generating empty sentences during inference. Fine-tune T5 with T5ForConditionalGeneration to multitask for Q&A and summarization.
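As a point of reference, the dataset-loading step described in these snippets looks roughly like this (a minimal sketch, assuming the datasets library is installed):

```python
from datasets import load_dataset

squad = load_dataset("squad")   # returns a DatasetDict keyed by split
print(squad)                    # DatasetDict({'train': ..., 'validation': ...})

example = squad["train"][0]
print(example["question"])
print(example["answers"])       # {'text': [...], 'answer_start': [...]}
```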
Oct 11, 2022 · My idea is to fine-tune the base layers of the deepset/deberta-v3-large-squad2 model using the MLM and/or NSP approach, and then to re-attach the "QA" layer(s) in hopes of adapting the model to my domain, keeping the QA "knowledge" from the last layer and, ultimately, keeping the manual QA labeling to a minimum. Dec 16, 2020 · Looking to fine-tune a model for QA/text generation (not sure how to frame this), and I'm wondering how to best prepare the dataset in a way that I can feed multiple answers to the same question. My goal is to facilitate the creation of a unique answer to a given question that is based on the input answers. Jul 2, 2021 · Fine-tuning BERT for question answering: sequence output problem. IndoBERT trained by IndoLEM and fine-tuned on translated SQuAD 2.0. Learn how to fine-tune Llama 2 with LoRA (Low-Rank Adaptation) for question answering. Most examples that I see are for GPT-J or GPT-NeoX, which do support the fast tokenizer, but my use case is a smaller model (the 125M-parameter model). The optimizer used is Adam with a learning rate of 1.25e-5. Sep 8, 2023 · Below is code from Hugging Face that shows how to do the fine-tuning, followed by some open datasets. This is a question answering model for the AAAI 2024 workshop paper "Team Trifecta at Factify5WQA: Setting the Standard in Fact Verification with Fine-Tuning". mT5 and T5 cast every NLP problem into a text-to-text format, so you can create training examples accordingly (see the sketch after this paragraph). There are two common forms of question answering: Extractive: extract the answer from the given context. Feb 22, 2021 · This script only allows you to do extractive question answering (i.e. the model predicts start_positions and end_positions, indicating which tokens are at the start and the end of the answer). SQuAD 2.0; mailong25; UIT-ViQuAD; MultiLingual Question Answering. This model is intended to be used for QA in the Vietnamese language, so the validation set is Vietnamese only (but English works fine). Llama 2 is a family of state-of-the-art open-access large language models released by Meta today, and we're excited to fully support the launch with comprehensive integration in Hugging Face. For that, I tried to follow the Quickstart guide on the Supervised Fine-tuning Trainer. This blog post describes how you can use LLMs to build and deploy your own app in just a few lines of Python code with the Hugging Face ecosystem. Google's BERT model is a pre-trained language model. Model description: the model can be used by fine-tuning on a downstream task, such as question answering, sequence classification, and token classification. #! pip install datasets transformers. The model was fine-tuned on 32 Cloud TPU v3 cores for 200,000 steps with maximum sequence length 512 and batch size of 128.
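The text-to-text framing mentioned above can be sketched as follows; the "question: … context: …" wording is a common convention rather than a fixed API, and the example pair is made up:

```python
# A made-up QA pair cast into the text-to-text form used by T5/mT5.
context = (
    "Manchester United finished sixth in the Premier League "
    "during the 2021/22 season."
)
question = "Where did Manchester United finish in the 2021/22 season?"
answer = "sixth"

source_text = f"question: {question} context: {context}"  # fed to the encoder
target_text = answer                                       # what the decoder should generate

print(source_text)
print(target_text)
```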
Developed by: YanSte. The pipelines are a great and easy way to use models for inference. This involves posing questions about a document and identifying the answers as spans of text in the document itself. May 6, 2022 · Create your own question answering dataset, or augment an existing one using Ground Truth; use Hugging Face datasets to combine and tokenize text; fine-tune a BERT model on your question answering data using SageMaker training; deploy your model to a SageMaker endpoint and visualize your results; annotation user interface. BART-LARGE fine-tuned on SQuAD v1. If you decide to use pre-trained Pythia-1B as a basis for your fine-tuned model, please conduct your own risk and bias assessment. May 11, 2022 · If all your examples have "Answer: X", where X is a word (or consecutive words) in the text, then it's probably best to do a SQuAD-style fine-tuning with a BERT-style model. The answers are longer-form (2 to 3 sentences). Context: "In early 2012, NFL Commissioner Roger Goodell stated that the …" Jul 26, 2023 · I give it a question and context (I would guess anywhere from 200-1000 tokens), and ask it to answer the question based on the context (the context is retrieved from a vectorstore using similarity search). There is a nice Hugging Face tutorial. On the Hugging Face site I've found an example that I'd like to use of a fine-tuned model. Mar 15, 2021 · Question answering bot: fine-tuning with custom dataset. Table Question Answering: models that are capable of answering questions based on a table. Jul 15, 2022 · I am in the process of creating a custom dataset to benchmark the accuracy of the 'bert-large-uncased-whole-word-masking-finetuned-squad' model for my domain, to understand if I need to fine-tune further, etc. Sep 6, 2020 · Tutorial. Question Answering (QA) is a type of natural language processing task where a model is trained to answer questions based on a given context or passage of text. The only difference is that we need a special data collator that can randomly mask some of the tokens in each batch. While following the instructions in the "Fine-tuning with custom datasets" transformers documentation. With the help of an AraElectra classifier to predict unanswerable questions. I want to build a question answering system by fine-tuning BERT using SQuAD 1.1. This essay explores the potential of fine-tuning BARTpho for question answering (QA), a crucial component in building intelligent systems that can understand and respond to Vietnamese queries. You will also find links to the official documentation, tutorials, and pretrained models of RoBERTa. We need to fine-tune an LLM with these documents, and based on those documents the model has to answer the questions asked. Language model: xlm-roberta-large. These pipelines are objects that abstract most of the complex code from the library, offering a simple API dedicated to several tasks, including Named Entity Recognition, Masked Language Modeling, Sentiment Analysis, Feature Extraction and Question Answering. BART is a seq2seq model intended for both NLG and NLU tasks.
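A minimal sketch of the pipeline API for extractive QA mentioned in these snippets; the checkpoint is one commonly used SQuAD model, and the context sentence is only illustrative:

```python
from transformers import pipeline

qa = pipeline("question-answering", model="distilbert-base-cased-distilled-squad")

result = qa(
    question="Who is the NFL Commissioner?",
    context="In early 2012, NFL Commissioner Roger Goodell stated that the league planned rule changes.",
)
print(result)  # {'score': ..., 'start': ..., 'end': ..., 'answer': 'Roger Goodell'}
```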
I now wish to fine tune a model using this dataset, and the goal is question-answering chatbot. Jan 13, 2022 · We will use the 🤗 Datasets library to download the SQUAD question answering dataset using load_dataset(). In this setup, fine-tuning takes around 20 hours. In this page, you will learn how to use RoBERTa for various tasks, such as sequence classification, text generation, and masked language modeling. context) for a reader to extract answers from. Let’s load the dataset. You can also use the raw model for feature extraction (i. [‘Q: How to delete the account A: IOS: Go to settings on the app, click on Edit, Click on Logout, Delete the Account Android: Go to settings, Click on the three dots, Click on Logout, Delete the Account. Answers to customer questions can be drawn from those documents. RoBERTa is a robustly optimized version of BERT, a popular pretrained model for natural language processing. This is bart-large model finetuned on SQuADv1 dataset for question answering task. We train the model using over 220M words, aggregated from three main sources: an Indonesian Web Corpus (Medved and Suchomel, 2017) (90M words). Visual Question Answering (VQA) is the task of answering open-ended questions based on an image. Pythia models work with the Hugging Face Transformers Library. My dataset contains clinical medical files - which have been taken by a nurse or physician during the patient diagnosis/routine checkup. However the instructions show how to train a model like so. Edit model card. , use Alpaca-LoRA or libraries like LangChain and FastChat ). In this lesson, we will fine-tune the BERT model on the SQuAD dataset Mar 15, 2021 · Hello everybody I would like to fine-tune a custom QAbot that will work on italian texts (I was thinking about using the model ‘dbmdz/bert-base-italian-cased’) in a very specific field (medical reports). However, if you want to persist with an approach similar to your current one, given the limited data you have, I would highly recommend considering a zero-shot approach. 0 BARTpho, a pre-trained sequence-to-sequence model specifically designed for Vietnamese, offers a powerful solution for various NLP applications. 3 documentation. NajiAboo June 7, 2023, 2:25am 1. I am trying to fine tune T5 for question and answering task using Qlora. In this tutorial, you will fine-tune a pretrained model with a deep learning framework of your choice: Fine-tune a pretrained model with 🤗 Transformers Trainer. See the question answering task page for more Let’s see how we can do this on the fly during fine-tuning using a special data collator. 0. How to use To use the model to get the features of a given text in PyTorch: Dec 10, 2020 · Hi all! Looking to fine-tune a model for QA/Text-Generation (not sure how to frame this) and I’m wondering how to best prepare the dataset in a way that I can feed multiple answers to the same question? My goal is to facilitate the creation of a unique answer to a given question that is based on the input answers. mT5 and T5 cast every NLP problem into a text-to-text format, so you can create training examples as: Jun 4, 2020 · The Hugging Face Transformers library has a BertForQuestionAnswering model that is already fine-tuned on the SQuAD dataset. Feb 15, 2024 · June 29, 2022. lseongjoo July 2, 2021, 2:26am 1. XLM-RoBERTa large for QA on Vietnamese languages (also support various languages) Overview. You can try to see how far you can get with LLMs and prompting (e. json (SQuAD 2. 
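To illustrate the point about BertForQuestionAnswering checkpoints that are already fine-tuned on SQuAD, here is a rough inference sketch; the question and context are invented for the example:

```python
import torch
from transformers import AutoModelForQuestionAnswering, AutoTokenizer

name = "bert-large-uncased-whole-word-masking-finetuned-squad"
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModelForQuestionAnswering.from_pretrained(name)

# Invented question/context pair, just to show the mechanics.
question = "Who wrote the notes in the clinical file?"
context = "The clinical file contains notes taken by a nurse during a routine checkup."

inputs = tokenizer(question, context, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# The predicted answer span is read off the start/end logits.
start = int(outputs.start_logits.argmax())
end = int(outputs.end_logits.argmax()) + 1
print(tokenizer.decode(inputs["input_ids"][0][start:end]))  # likely "a nurse"
```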
Llama 2 is being released with a very permissive community license and is available for commercial use. Fine-tune: deepset/xlm-roberta-large-squad2. Our goal is to refine the BERT question answering Hugging Face model's proficiency, enabling it to adeptly tackle and respond to a broader spectrum of conversational inquiries. Training and fine-tuning: the model has been fine-tuned using the QLoRA technique and Hugging Face libraries such as accelerate, peft and transformers. Fine-tuning mT5 on XNLI. The fine-tuning process refines the model's understanding of context, allowing it to excel in tasks that require nuanced comprehension and contextual reasoning, making it a robust solution for question answering applications in natural language processing. Model size (after training): 420 MB. But I think I am doing something incorrect, as the time and memory taken by the QLoRA model are the same as when fine-tuning the full model. We have domain-specific PDF documents. The example works on the page, so clearly a pretrained model for it exists. This involves fine-tuning a model which predicts a start position and an end position in the passage. Subsequently, the fine-tuned model is used to create a Gradio interface, enabling users to interactively input context and questions to receive answers from the model. May 14, 2024 · For this example, we will use the VQAv2 dataset and fine-tune the model to answer questions about images. Oct 30, 2023 · In this lesson, we will fine-tune the BERT model on the SQuAD dataset for question answering. This model is a fine-tuned version of google/flan-t5-large for the question & answer pair generation task on lmqg/qag_squad (dataset_name: default) via lmqg. We will use the recipe instructions to fine-tune our GPT-2 model so that afterwards it can write recipes that we can cook. We will be using an already available fine-tuned BERT model from the Hugging Face Transformers library to answer questions based on the stories from the CoQA dataset. google/tapas-large-finetuned-sqa: Table Question Answering, updated Nov 29, 2021. This is the AraElectra model, fine-tuned using the Arabic-SQuADv2.0 dataset. Here are my two problems: the answer ends, and the rest of the tokens until it reaches max_new_tokens are all newlines. If you'd like to save inference time, you can first use passage ranking models to narrow down which passages are likely to contain the answer. IndoBERT is the Indonesian version of the BERT model. You can also use the raw model for feature extraction (i.e. obtaining embeddings for input text). Nov 22, 2021 · As explained in IBERT's model card, fine-tuning the model consists of 3 stages: full-precision fine-tuning; quantization; quantization-aware training. BART was proposed in the paper "BART: Denoising Sequence-to-Sequence Pre-training for Natural Language Generation, Translation, and Comprehension". Active community: the Hugging Face library has a vast and active user community, which means you can obtain assistance and support and contribute to the library's growth.
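A rough sketch of the QLoRA-style setup referred to above (4-bit base model plus LoRA adapters via peft); the model name and hyperparameters are placeholders, not values taken from the original posts:

```python
import torch
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

# Placeholder checkpoint; Llama 2 weights are gated and need approved access.
model_name = "meta-llama/Llama-2-7b-hf"

# 4-bit quantization of the frozen base model (requires bitsandbytes and a GPU).
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)
model = AutoModelForCausalLM.from_pretrained(
    model_name, quantization_config=bnb_config, device_map="auto"
)

# Small trainable LoRA adapters on top of the quantized model.
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # only the adapter weights should be trainable
```

If the trainable-parameter count printed here is a small fraction of the total, the adapters are attached correctly; training the whole model by mistake would explain identical time and memory use.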
I want to be able to prompt the fine-tuned model with questions such as "How can MU improve?", or "What are MU's biggest weaknesses?" What data should I train the model on? Aug 25, 2021 · This script only allows you to do extractive question answering (i. HuggingFace provides pre-trained models, datasets, and Fine-tune: MRCQuestionAnswering; Language: Vietnamese, Englsih; Downstream-task: Extractive QA; Dataset (combine English and Vietnamese): Squad 2. incorrect_answers: A list of incorrect (false) answer strings. mT5 and T5 cast every NLP problem into a text-to-text format, so you can create training examples as: Question answering comes in many forms. How do i display the evaluation metrics like f1 and exact match? Model Card of lmqg/flan-t5-large-squad-qag. This means you must fine-tune your T5 model Frequently Asked Questions. Fine-tune a pretrained model in native PyTorch. edited Jul 18, 2022. 0 i would like to ask about evaluating question answering system, i know there is squad and squad_v2 metrics, how can we use them when fine-tune bert with pytorch? thank you. 8B for deployment, as long as your use is in accordance with the Apache 2. As data, we use the German Recipes Dataset, which consists of 12190 german recipes with metadata crawled from chefkoch. 0 documentation using TensorFlow Keras, model fit produces below problem and fail to start training. 1, we learned how to directly use the pre-trained BERT model in Hugging Face for question answering. pravingaikwad549 March 20, 2024, 8:28pm 1. I want to finetuning mt5-small for question answering in tamil using a translated SQuaD dataset with the run_qa_seq2seq script. 0. This guide will show you how to fine-tune DistilBERT on the SQuAD dataset for extractive question answering. Its input are question and context, and output is the answers derived from the context. We set the training arguments for model training and finally use the SFTtrainer() class to fine-tune the Llama-2 Mar 25, 2021 · Hey @EmuK, indeed most expositions of “question answering” are really referring to the simpler task of reading comprehension. February 15, 2024. In this setup, you're input is (basically) text, start_pos, end_pos triplets: Text. You can take a look at the example scripts as well as the official QA notebook. Abstractive: generate an answer from the context that correctly answers the question. We trained gpt2 model with pdf chunks and it’s not giving answers for the question. However, you can fine-tune mT5 for question-answering. The Stanford Question Answering Dataset (SQuAD) is a collection of 100k Nov 19, 2023 · Beginners. I have prepared a small FAQ dataset of questions and answers in JSON form, and following the alpaca dataset format, created a text column as my training data. Downstream-task: Extractive QA. 0 for Q&A downstream task. Overview. The problem is that now I’m trying to use my own files (formatted in SQuaD 2. Aug 17, 2020 · Interested in fine-tuning on your own custom datasets but unsure how to get going? I just added a tutorial to the docs with several examples that each walk you through downloading a dataset, preprocessing & tokenizing, and training with either Trainer, native PyTorch, or native TensorFlow 2. In this example, we’ll look at the particular type of extractive QA that involves answering a question about a passage by highlighting the segment of the passage that answers the question. Language: Arabic. Task Variants This place can be filled with variants of this task if there's any. 
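For the exact match / F1 question raised here, a minimal sketch using the squad metric from the evaluate library; the IDs and answer texts are dummy values:

```python
import evaluate

# Use "squad_v2" instead for datasets with unanswerable questions
# (its predictions also need a no_answer_probability field).
squad_metric = evaluate.load("squad")

predictions = [{"id": "q1", "prediction_text": "sixth"}]
references = [{"id": "q1", "answers": {"text": ["sixth"], "answer_start": [42]}}]

print(squad_metric.compute(predictions=predictions, references=references))
# -> {'exact_match': 100.0, 'f1': 100.0}
```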
Check the superclass documentation for the generic methods the library implements. sep_token (str, optional, defaults to "</s>") — the separator token, which is used when building a sequence from multiple sequences, e.g. two sequences for sequence classification, or a text and a question for question answering; it is also used as the last token of a sequence built with special tokens. In this tutorial, we will be following the Method 2 fine-tuning approach to build a question answering AI using context. Sep 5, 2023 · To experience the full capabilities of Infery-LLM, we invite you to get started today. Large language models (LLMs) like GPT, BART, etc. have demonstrated incredible abilities in natural language. My end goal is to fine-tune GPT-Neo on the SQuAD v2.0 dataset for Q/A. correct_answers: A list of correct (truthful) answer strings. See the task page. Oct 17, 2023 · Simple fine-tuning: the Hugging Face library contains tools for fine-tuning pre-trained models on your dataset, saving you time and effort over training a model from scratch. This is known as fine-tuning, an incredibly powerful training technique. This model inherits from PreTrainedModel. Uncomment the following cell and run it. I already followed this guide and fine-tuned an English model by using the default train and dev file. So for the first step, you can fine-tune IBERT just as any BERT model. My dataset looks like: 'context' = … 'question' = … 'answer' = …, and I am getting some errors with this script. In this project, the SQuAD dataset is utilized to fine-tune a Hugging Face model (bert-large-uncased-whole-word-masking-finetuned-squad) for question answering. This guide will walk you through prerequisites and environment setup, setting up the model and tokenizer, and quantization configuration. And I would like to make it a Q&A model; I tried to do the fine-tuning following this document: "Fine-tuning with custom datasets" in the transformers documentation. This guide will show you how to: fine-tune BERT on the regular configuration of the SWAG dataset to select the best answer given multiple options and some context. May 30, 2023 · Next, we will discuss how to fine-tune a BERT-based model using PyTorch for a question-answering task on a specific dataset. Model details. Intended use: fine-tuning. You may also further fine-tune and adapt Pythia-1B for deployment, as long as your use is in accordance with the Apache 2.0 license. Fine-tuning a masked language model is almost identical to fine-tuning a sequence classification model, like we did in Chapter 3. In this paper, we propose a parameter-efficient framework for fine-tuning MLLMs specifically tailored to Med-VQA applications, and empirically validate it on a public benchmark dataset. Oct 30, 2023 · In the previous lesson (4.1), we learned how to directly use the pre-trained BERT model in Hugging Face for question answering.
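For datasets shaped like 'context' / 'question' / 'answer', one way to get them into SQuAD-style records (answer text plus character offset) is sketched below; the example records are invented:

```python
# Invented records in the {'context', 'question', 'answer'} shape described above.
records = [
    {
        "context": "The patient was seen by a nurse during a routine checkup.",
        "question": "Who saw the patient?",
        "answer": "a nurse",
    },
]

squad_style = []
for i, rec in enumerate(records):
    start = rec["context"].find(rec["answer"])
    if start == -1:
        continue  # extractive QA needs the answer to be a literal span of the context
    squad_style.append(
        {
            "id": str(i),
            "context": rec["context"],
            "question": rec["question"],
            "answers": {"text": [rec["answer"]], "answer_start": [start]},
        }
    )

print(squad_style[0])
```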
If you're opening this notebook locally, make sure your environment has an install from the last version of those Oct 11, 2022 · My idea is to fine-tune the base layers of the deepset/deberta-v3-large-squad2 model using the MLM and/or NSP approach, and then to re-attach the “QA” layer (s) in hopes of adapting the model to my domain, keeping the QA “knowledge” from the last layer and, ultimately, keeping the manually QA labeling to a minimum (since we do have a Feb 17, 2023 · Define the questions and answers. Mar 15, 2021 · Hello everybody I would like to fine-tune a custom QAbot that will work on italian texts (I was thinking about using the model ‘dbmdz/bert-base-italian-cased’) in a very specific field (medical reports). 1 or squad2. Fine-tune a pretrained model in TensorFlow with Keras. What you’re probably looking for is either: open-domain question answering, where only the query is supplied at runtime and a retriever fetches relevant documents (i. It is designed to generate text responses in a question-answering format. When dealing with conversational questions, we're Dec 10, 2020 · Hi all! Looking to fine-tune a model for QA/Text-Generation (not sure how to frame this) and I’m wondering how to best prepare the dataset in a way that I can feed multiple answers to the same question? My goal is to facilitate the creation of a unique answer to a given question that is based on the input answers. The run_qa. When looking at the different Question Answering datasets on the Hugging Face site (squad, adversarial_qa, etc. We can see the training, validation and test sets all have Feb 17, 2023 · Define the questions and answers. The answers are longer-form (2 to 3 sentences) and I want the model to output Mar 15, 2021 · Hello everybody I would like to fine-tune a custom QAbot that will work on italian texts (I was thinking about using the model ‘dbmdz/bert-base-italian-cased’) in a very specific field (medical reports). Oct 26, 2021 · Dear list, What I like to do is to pretrain a model and finetune it for Q&A with Squad dataset. Dec 5, 2023 · Extractive Question Answering Tutorial With Hugging Face. We are looking to fine-tune a LLM model. Hi everyone, If you have enough compute you could fine tune BLOOM on any downstream task but you would need enough GPU RAM to store the model + gradient (optimizer state) which is quite costly. The tiny (67M) DistilBERT model is fine-tuned with the SQuAD dataset (see below) which contains data in this format: LayoutLMv2 solves the document question-answering task by adding a question-answering head on top of the final hidden states of the tokens, to predict the positions of the start and end tokens of the answer. The model is available under the Creative Commons Attribution-ShareAlike 4. It's been trained on question-answer pairs, including unanswerable questions, for the task of Question Answering. Our goal is to refine Feb 9, 2024 · February 9, 2024. Inference May 15, 2021 · In this article, we will be working together on one such commonly used task—question answering. Dataset: mailong25/bert-vietnamese-question-answering. py script allows to fine-tune any model from our hub (as long as its architecture has a ForQuestionAnswering version in the library) on a question-answering dataset (such as SQuAD, or any other QA dataset available in the datasets library, or your own csv/jsonlines files) as long as they are structured the same way as SQuAD. 7. Time to look at question answering! 
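As a stripped-down, in-notebook approximation of what a script like run_qa.py automates, the following sketch tokenizes SQuAD, maps answers to token-level start/end positions, and trains with the Trainer API. It deliberately skips the stride/overflow handling and answer post-processing that a full implementation needs, and the hyperparameters are placeholders:

```python
from datasets import load_dataset
from transformers import (
    AutoModelForQuestionAnswering,
    AutoTokenizer,
    DefaultDataCollator,
    Trainer,
    TrainingArguments,
)

model_name = "distilbert-base-uncased"  # placeholder base checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForQuestionAnswering.from_pretrained(model_name)
squad = load_dataset("squad")

def preprocess(batch):
    # Tokenize (question, context) pairs and map character-level answer spans
    # onto token-level start/end positions.
    enc = tokenizer(
        batch["question"],
        batch["context"],
        truncation="only_second",
        max_length=384,
        padding="max_length",
        return_offsets_mapping=True,
    )
    starts, ends = [], []
    for i, offsets in enumerate(enc["offset_mapping"]):
        answer = batch["answers"][i]
        start_char = answer["answer_start"][0]
        end_char = start_char + len(answer["text"][0])
        seq_ids = enc.sequence_ids(i)
        start_tok = end_tok = 0  # falls back to 0 if the answer was truncated away
        for idx, (s, e) in enumerate(offsets):
            if seq_ids[idx] != 1:      # only look at context tokens
                continue
            if s <= start_char < e:
                start_tok = idx
            if s < end_char <= e:
                end_tok = idx
        starts.append(start_tok)
        ends.append(end_tok)
    enc["start_positions"] = starts
    enc["end_positions"] = ends
    enc.pop("offset_mapping")          # not needed by the model
    return enc

tokenized = squad.map(preprocess, batched=True,
                      remove_columns=squad["train"].column_names)

args = TrainingArguments(
    output_dir="distilbert-squad-qa",
    learning_rate=3e-5,
    num_train_epochs=2,
    per_device_train_batch_size=16,
)

trainer = Trainer(
    model=model,
    args=args,
    train_dataset=tokenized["train"],
    eval_dataset=tokenized["validation"],
    data_collator=DefaultDataCollator(),
)
trainer.train()
```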
This task comes in many flavors, but the one we'll focus on in this section is called extractive question answering. If you're opening this notebook on Colab, you will probably need to install 🤗 Transformers and 🤗 Datasets. I trained a RoBERTa model after reading the blog post on the Hugging Face site. It is fine-tuned on the FACTIFY5WQA dataset, based on the microsoft/deberta-v3-large model. A common task people fine-tune autoregressive models for is question answering. See the question answering task page for more details. Jan 21, 2024 · The model is based on the T5 architecture (google/flan-t5-large) and has undergone fine-tuning using the PEFT library. Examples include: sequence classification (sentiment) – IMDb; token classification (NER) – W-NUT. The following script applies the LoRA and quantization settings (defined in the previous script) to the Llama-2-7b-chat-hf model we imported from Hugging Face. Language model: AraElectra. Install Transformers and Datasets from Hugging Face: pip install transformers datasets. You can use the Table Question Answering models to simulate SQL execution by inputting a table (see the sketch below). You can use Question Answering (QA) models to automate the response to frequently asked questions by using a knowledge base (documents) as context.
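For the table question answering mention, a minimal sketch with the pipeline API and a TAPAS checkpoint; the table contents are made up:

```python
from transformers import pipeline

# TAPAS checkpoints additionally require the torch-scatter package.
table_qa = pipeline("table-question-answering",
                    model="google/tapas-base-finetuned-wtq")

# Toy table; TAPAS expects all cell values as strings.
table = {
    "Player": ["Ronaldo", "Fernandes", "Sancho"],
    "Goals": ["18", "10", "3"],
}
print(table_qa(table=table, query="Who scored the most goals?"))
# e.g. {'answer': 'Ronaldo', 'coordinates': [(0, 0)], 'cells': ['Ronaldo'], 'aggregator': 'NONE'}
```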