{ "cells": [ { "cell_type": "markdown", "metadata": { "id": "_On49_jMY2P_" }, "source": [ "## Homework 4: Using BERT for Text Classification\n", "\n", "For [UMass CS485, Fall 2023](https://people.cs.umass.edu/~brenocon/cs485_f23/)\n", "\n", "### Submit via Gradescope as a PDF (File>Print>Save as PDF) and as a Jupyter Notebook (.ipynb). 50 points total (plus extra credit).\n", "\n", "Due Sunday Dec 3. Please finish ahead of time so you have time to prepare your presentations!\n", "\n", "---\n", "\n", "##### *How to do this problem set:*\n", "\n", "- Some questions require writing Python code and computing results, and the rest of them have written answers. For coding problems, you will have to fill out all code blocks that say `YOUR CODE HERE!`.\n", "\n", "- For text-based answers, you should replace the text that says \"WRITE YOUR ANSWER HERE\" with your actual answer.\n", "\n", "---\n", "\n", "##### *How to submit this problem set:*\n", "- Write all the answers in this CoLab notebook, and submit both as PDF and a Jupyter Notebook.\n", "\n", " 1. Once you are finished, generate a PDF via (File -> Print -> Save as PDF) and upload it to Gradescope's \"HW4 PDF Submission\" entry.\n", "\n", " 2. Also generate a Jupyter Notebook (.ipynb) via (File -> Download -> Download .ipynb) and upload it to Gradescope's \"HW4 Code Submission\" entry.\n", "\n", "- **Important:** Check your PDF before you submit to Gradescope to make sure it exported correctly. If Colab gets confused about your syntax, it will sometimes terminate the PDF creation routine early.\n", "\n", "- **Important:** On Gradescope, please make sure that you tag each page with the corresponding question(s). This makes it significantly easier for our graders to grade submissions, especially with the long outputs of many of these cells. We will take off points for submissions that are not tagged.\n", "\n", "- When creating your final version of the PDF to hand in, please do a fresh restart and execute every cell in order. One handy way to do this is by clicking `Runtime -> Run All` in the Notebook menu. *Make sure to attach a GPU.*\n", "---\n", "##### *Computing Resources*\n", "- Google CoLab provides free access to a GPU for up to 12 hours of continuous use. If you exceed this limit, you will not be able to access a GPU for some time. There's no guarantee on when you'll regain access, but generally it will take several hours.\n", "- *This assignment needs nowhere near 12 hours of GPU computing.*\n", "- Avoid leaving your notebook idling with a GPU attached, this is any easy way to rack up GPU usage without meaning to.\n", "---" ] }, { "cell_type": "markdown", "metadata": { "id": "DKnuraijYTXi" }, "source": [ "# Part 0: Setup\n" ] }, { "cell_type": "markdown", "metadata": { "id": "rbvUWP4DZcwK" }, "source": [ "## Adding a hardware accelerator\n", "The purpose of this homework is for you to become familiar with using large-scale pretrained lanuage models such as BERT. Since models such as BERT are large neural networks, we will need to attach a GPU for this assignment; otherwise, training and extracting features will take a very long time.\n", "\n", "To attach and use a GPU in this CoLab notebook, complete the following steps:\n", "\n", "1. First, attach a GPU by navigating the CoLab menu as follows: \n", "`Edit > Notebook Settings > Hardware accelerator > (GPU)`\n", "\n", "2. Then, set the `use_gpu` flag in the following code cell to `True`\n", "\n", "3. 
Finally, confirm that a GPU is detected (or *not* detected) by running the following code cell." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "L15Jm70wZxmZ" }, "outputs": [], "source": [ "import torch\n", "\n", "use_gpu = True  # Change this flag as needed\n", "\n", "if use_gpu:\n", "    # Check the GPU is detected\n", "    if not torch.cuda.is_available():\n", "        print(\"ERROR: No GPU detected. Please add a GPU; if you're using Colab, use their UI.\")\n", "        assert False\n", "    # Get the GPU device name.\n", "    device_name = torch.cuda.get_device_name()\n", "    n_gpu = torch.cuda.device_count()\n", "    print(\"Found device: {}, n_gpu: {}\".format(device_name, n_gpu))\n", "else:\n", "    # Check that no GPU is detected\n", "    if torch.cuda.is_available():\n", "        print(\"ERROR: GPU detected.\")\n", "        print(\"Remove the GPU or set the use_gpu flag to True.\")\n", "        assert False\n", "    print(\"No GPU found. Using CPU.\")\n", "    print(\"WARNING: Without a GPU, your code will be extremely slow.\")" ] }, { "cell_type": "markdown", "metadata": { "id": "tW6QWGF_ZzZA" }, "source": [ "Note that attaching a GPU to an active notebook (or detaching one from it) will reset the notebook's runtime." ] }, { "cell_type": "markdown", "metadata": { "id": "JgSDLMxwZ5b-" }, "source": [ "## Installing 🤗 Hugging Face packages" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "FxNFkKquZ9y8" }, "outputs": [], "source": [ "!pip install transformers==4.24.0\n", "!pip install datasets==2.7.1\n", "!pip install evaluate==0.3.0" ] }, { "cell_type": "markdown", "metadata": { "id": "gady3WkKaT0b" }, "source": [ "## Import numpy\n", "We will be using numpy arrays in part of this assignment. Feel free to use the numpy package anywhere within the assignment." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "YQ4qhhEdaVND" }, "outputs": [], "source": [ "import numpy" ] }, { "cell_type": "markdown", "metadata": { "id": "uWZZjLKvaYYi" }, "source": [ "## Define pretrained BERT model\n", "Throughout this assignment, we'll use the `bert-base-uncased` pretrained model from 🤗 Hugging Face. This pretrained model uses the \"base\" (12-layer) architecture for BERT and preprocesses texts such that they are lowercased (and accent marks are stripped). See the model [documentation](https://huggingface.co/bert-base-uncased) for more details." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "iAZB0f2CaZ1c" }, "outputs": [], "source": [ "pretrained_bert = 'bert-base-uncased'" ] }, { "cell_type": "markdown", "metadata": { "id": "S49oG4DRatQu" }, "source": [ "## Load our working corpus, a movie review dataset\n", "For this assignment, we'll use another subsample of the Large Movie Review Dataset (Maas et al. ACL 2011); we used some of it in HW1. Note that this time we will load the dataset using the HuggingFace datasets package. Additionally, in this version, positive reviews are labeled as `1` and negative reviews as `0`."
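, "\n", "\n", "As a quick sanity check of this label convention, you can peek at a single review and its label (a minimal sketch that simply repeats the `load_dataset` call from the next code cell):\n", "\n", "```python\n", "from datasets import load_dataset\n", "\n", "dataset = load_dataset(\"imdb\")   # same call as in the next code cell\n", "example = dataset['train'][0]    # one review/label pair from the train split\n", "print(example['label'])          # 0 = negative, 1 = positive\n", "print(example['text'][:200])     # first 200 characters of the review\n", "```"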
] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "nDveDr7hapSy" }, "outputs": [], "source": [ "from datasets import load_dataset\n", "\n", "dataset = load_dataset(\"imdb\")" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "Dzpmdr1KcphN" }, "outputs": [], "source": [ "NUM_TRAIN = 750\n", "NUM_DEV = 250\n", "NUM_TEST = 250\n", "\n", "def build_split(dataset, n_samples, offset=0):\n", " class_size = n_samples // 2\n", " # Get negative samples\n", " texts = dataset['text'][offset:class_size+offset]\n", " labels = dataset['label'][offset:class_size+offset]\n", " # Get positive samples\n", " texts += dataset['text'][-offset-class_size:]\n", " labels += dataset['label'][-offset-class_size:]\n", " if offset:\n", " texts = texts[:-offset]\n", " labels = labels[:-offset]\n", " return texts, labels\n", "\n", "\n", "# Training data\n", "train_texts, train_labels = build_split(dataset['train'], NUM_TRAIN)\n", "test_texts, test_labels = build_split(dataset['test'], NUM_TEST)\n", "dev_texts, dev_labels = build_split(dataset['test'], NUM_DEV, offset=NUM_TEST)\n", "\n", "print(\"train split: {} reviews\".format(len(train_labels)))\n", "print(\"dev split: {} reviews\".format(len(dev_labels)))\n", "print(\"test split: {} reviews\".format(len(test_labels)))" ] }, { "cell_type": "markdown", "metadata": { "id": "XyLGFGdv_NAt" }, "source": [ "## Define confidence interval method\n", "For this assignment we will compute confidence intervals for accuracy measurements using a normal approximation. If you used the bootstrap, it would calculate a very similar CI." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "ZmG0SIym_MP7" }, "outputs": [], "source": [ "import scipy\n", "\n", "def get_confidence_intervals(accuracy, sample_size, confidence_level):\n", " \"\"\" calling this with arguments (0.8, 100, .95) returns\n", " the lower and upper bounds of a 95% confidence interval\n", " around the accuracy of 0.8 on a test set of size 100.\"\"\"\n", " z_score = -1 * scipy.stats.norm.ppf((1-confidence_level)/2)\n", " standard_error = numpy.sqrt(accuracy * (1-accuracy) / sample_size)\n", " lower_ci = accuracy - standard_error*z_score\n", " upper_ci = accuracy + standard_error*z_score\n", " return lower_ci, upper_ci" ] }, { "cell_type": "code", "source": [ "# Example: if you had 80% accuracy on an N=250 sized test set, your CI is [75.0%...85.0%]\n", "get_confidence_intervals(0.8, 250, .95)" ], "metadata": { "id": "MHc4ep09fAin" }, "execution_count": null, "outputs": [] }, { "cell_type": "code", "source": [ "# Example: For a much larger test set, your CI is much smaller\n", "get_confidence_intervals(0.8, 10000, .95)" ], "metadata": { "id": "ZsPyNFjxfTi9" }, "execution_count": null, "outputs": [] }, { "cell_type": "markdown", "metadata": { "id": "-YKyvsuYXmFQ" }, "source": [ "# Part 1: Using BERT features for Text Classification (25 points)\n", "In this part, we'll use extracted BERT features for text classification. We will extract these features from the raw hidden states of different layers." ] }, { "cell_type": "markdown", "metadata": { "id": "Oqj4z9-vQBka" }, "source": [ "##Checking for a GPU\n", "While this part of the homework can be run without a GPU, it will take much longer. 
Specifically, extracting the hidden states from each layer in our pretrained BERT model in Question 1.1 will take over 30 minutes with a CPU, but only a few minutes with a GPU.\n", "\n", "Refer back to the section \"Adding a hardware accelerator\" in Part 0.\n", "\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "tCfCgPMkQBkb" }, "outputs": [], "source": [ "if not torch.cuda.is_available():\n", "    print(\"WARNING: No GPU detected. Add a GPU.\")\n", "else:\n", "    print(\"GPU detected.\")" ] }, { "cell_type": "markdown", "metadata": { "id": "-QDvo4invMXx" }, "source": [ "## Loading BERT model\n", "For this part, we'll use a pretrained BERT model, specifically the 🤗 `BertModel` that outputs the raw hidden states of BERT without any specific head on top. Refer to the 🤗 [documentation](https://huggingface.co/transformers/model_doc/bert.html#bertmodel) for more detail." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "NEUPq3BPYQ1b" }, "outputs": [], "source": [ "from transformers import AutoTokenizer, BertModel\n", "\n", "device = torch.device(\"cuda\" if torch.cuda.is_available() else \"cpu\")\n", "tokenizer = AutoTokenizer.from_pretrained(pretrained_bert)\n", "model = BertModel.from_pretrained(pretrained_bert,\n", "                                  output_hidden_states=True).to(device)" ] }, { "cell_type": "markdown", "metadata": { "id": "KD87dL4yvHWM" }, "source": [ "## Question 1.1 (5 points)\n", "First, we need to extract BERT features for each document in our dataset (i.e., movie review). For each document, we'll feed its (truncated) text into BERT and extract the raw hidden states of the [CLS] token for all 12 layers of the model to use as our features. We'll use the function `extract_bert_features` to extract these features for a collection of texts. The function `extract_bert_features` takes a list of texts `input_texts` as input and outputs a numpy array corresponding to the extracted features of these texts." ] }, { "cell_type": "markdown", "metadata": { "id": "Eb_ZBIiuxerZ" }, "source": [ "In the following code cell, complete the implementation of `extract_bert_features`. More specifically, your code must extract the raw hidden states of the [CLS] token for each layer from `hidden_states` and arrange these into the `feature` variable such that it is a numpy array with shape (# layers = 12, hidden_size = 768).\n", "\n", "HINTS\n", "- The `hidden_states` of a `BertModel` is a tuple of length 13 rather than 12 because it also contains the embedding layer of BERT. The hidden states for the embedding layer are the *first* element in `hidden_states`, followed by the hidden states of the following layers (from 1 to 12).\n", "- The hidden states for each layer within `hidden_states` (i.e. an element of `hidden_states`) are represented as an array with the following shape: (# batches, # tokens, hidden_size = 768). We are only running a single batch through BERT, so each layer's hidden state array will have a shape of (1, # tokens, hidden_size = 768).\n", "- To convert a PyTorch tensor to a numpy array, use the following command: `[tensor].detach().cpu().numpy()`\n", "- Use `torch.stack` and `numpy.stack` to \"stack\" PyTorch tensors and NumPy arrays along a new dimension. By default, this will be the first dimension of the resulting array. 
(See documentation: [PyTorch](https://pytorch.org/docs/stable/generated/torch.stack.html), [numpy](https://numpy.org/doc/stable/reference/generated/numpy.stack.html))\n", "- It will take several minutes to extract the features for our training and test sets. Generally, it should take under 3 minutes using a GPU. (It will take *much* longer using a CPU: over 30 minutes.)\n", "- Consider using a CPU while writing/debugging your code; just make sure to quit early (e.g., after extracting features for a single document or the hidden states for a single layer)." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "VUiNV-1srZqo" }, "outputs": [], "source": [ "def extract_bert_features(input_texts):\n", "    features = []\n", "    for i, text in enumerate(input_texts):\n", "        input = tokenizer.encode(text, truncation=True,\n", "                                 return_tensors=\"pt\").to(device)\n", "        hidden_states = model(input).hidden_states\n", "        feature = None\n", "        # YOUR CODE HERE!\n", "\n", "\n", "        assert feature.shape == (12, 768)\n", "        features.append(feature)\n", "\n", "    return numpy.stack(features)" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "hmAdJqNSyQ8o" }, "outputs": [], "source": [ "# Extract features for the training and test sets\n", "from timeit import default_timer as timer\n", "\n", "start = timer()\n", "train_features = extract_bert_features(train_texts)\n", "test_features = extract_bert_features(test_texts)\n", "end = timer()\n", "print(\"Extracted features in {:.1f} minutes\".format((end-start)/60))\n", "\n", "assert train_features.shape == (NUM_TRAIN, 12, 768)\n", "assert test_features.shape == (NUM_TEST, 12, 768)" ] }, { "cell_type": "markdown", "metadata": { "id": "2LE5y_FB-HtG" }, "source": [ "## Question 1.2 (5 points)\n", "BERT accepts token sequences up to 512 tokens in length (including special tokens). In order to handle longer movie reviews, we *truncated* these reviews to 510 tokens." ] }, { "cell_type": "markdown", "metadata": { "id": "hYzJ-a0nv3-q" }, "source": [ "### Question 1.2.1 (2 points)\n", "How often are reviews in our dataset truncated? In the following code cell, write code that calculates the number of reviews truncated in the training and test splits (i.e., `train_texts`, `test_texts`).\n", "\n", "HINT: Use the tokenizer's [`tokenize`](https://huggingface.co/docs/transformers/v4.24.0/en/internal/tokenization_utils#transformers.PreTrainedTokenizerBase.tokenize) method." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "ozO2b5GN-IVE" }, "outputs": [], "source": [ "train_truncated = 0\n", "test_truncated = 0\n", "\n", "# YOUR CODE HERE!\n", "\n", "\n", "print(\"train: {} reviews truncated\".format(train_truncated))\n", "print(\"test: {} reviews truncated\".format(test_truncated))" ] }, { "cell_type": "markdown", "metadata": { "id": "-0uo4XSF-U-7" }, "source": [ "### Question 1.2.2 (3 points)\n", "Why might truncation be problematic for our classification task? Explain your reasoning." ] }, { "cell_type": "markdown", "metadata": { "id": "mjs1gg4a-bGw" }, "source": [ "**WRITE YOUR ANSWER HERE**" ] }, { "cell_type": "markdown", "metadata": { "id": "3Pbhal_W-f3f" }, "source": [ "## Question 1.3 (5 points)\n", "Now, let's compare the performance of the extracted features from different layers. 
For each layer, use the layer's hidden states stored in `train_features` to train a [`LogisticRegression`](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) model, then predict the labels of the test reviews using the layer's extracted features in `test_features`. Store these predictions in `y_pred`, so the code cell will print the resulting classification accuracy for each layer's features.\n", "\n", "HINT: For all the layers, you should get accuracies in the 60s and 70s." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "oG4obrNL-RNi" }, "outputs": [], "source": [ "from sklearn.linear_model import LogisticRegression\n", "\n", "for i in range(12):\n", "    y_pred = None\n", "\n", "    lr_model = LogisticRegression(max_iter=1000)\n", "    # YOUR CODE HERE!\n", "\n", "\n", "    acc = (y_pred == test_labels).sum()/len(test_labels)\n", "    print(\"Layer {}: {:.3f} accuracy, 95% CI [{:.3f}, {:.3f}]\".format(i+1, acc, *get_confidence_intervals(acc, NUM_TEST, 0.95)))" ] }, { "cell_type": "markdown", "metadata": { "id": "q6vqS58xLYVd" }, "source": [ "## Question 1.4 (5 points)\n", "According to your results from Question 1.3, which layers perform best and which perform worst? Taking into consideration the 95% confidence intervals of the test accuracy results, do the performance differences between layers appear significant / meaningful? Explain your reasoning." ] }, { "cell_type": "markdown", "metadata": { "id": "J5dIZZ6TL3DY" }, "source": [ "**WRITE YOUR ANSWER HERE**" ] }, { "cell_type": "markdown", "metadata": { "id": "17nWQomP-klj" }, "source": [ "## Question 1.5 (5 points)\n", "In this problem, we represented a text by the extracted BERT features of the [CLS] token. However, there are other strategies. A popular option is to *average* the embeddings of all tokens of the input sequence. Do you think these alternative features will be more suitable for our classification task than using the [CLS] features? Why or why not?" ] }, { "cell_type": "markdown", "metadata": { "id": "poQSnEAb-kp3" }, "source": [ "**WRITE YOUR ANSWER HERE**" ] }, { "cell_type": "markdown", "metadata": { "id": "x58bz8GY-owH" }, "source": [ "## Question 1.6 (Extra Credit: 5 points)\n", "How could we construct a 768-dimensional embedding for a long movie review without truncating the review? Design and describe a method for doing so." ] }, { "cell_type": "markdown", "metadata": { "id": "u2P_MoY5OOI0" }, "source": [ "**WRITE YOUR ANSWER HERE**" ] }, { "cell_type": "markdown", "metadata": { "id": "KnTDV7_0-4o1" }, "source": [ "# Part 2: Fine-Tuning BERT for Text Classification (25 points)\n", "In this part, we'll perform the same text classification task as Part 1, but this time we'll fine-tune BERT rather than using extracted BERT features.\n", "\n", "**Be sure to use a GPU for this portion of the homework.**" ] }, { "cell_type": "markdown", "metadata": { "id": "a84NwRzbYE72" }, "source": [ "## Checking for a GPU\n", "In this part of the homework we will need a GPU; otherwise, fine-tuning will take a really long time. Refer back to the section \"Adding a hardware accelerator\" in Part 0." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "JWqTFKPMXiLs" }, "outputs": [], "source": [ "if not torch.cuda.is_available():\n", "    print(\"ERROR: No GPU detected. 
Add a GPU.\")\n", "    assert torch.cuda.is_available()" ] }, { "cell_type": "markdown", "metadata": { "id": "CXgMIhxh-_j9" }, "source": [ "## Question 2.1 (5 points)\n", "When fine-tuning BERT, we need to choose our hyperparameters carefully. In order to perform a proper hyperparameter search, we need a validation set.\n", "Why is it important to have a distinct validation set in addition to our training and test sets?" ] }, { "cell_type": "markdown", "metadata": { "id": "t-9GDJrp-_mH" }, "source": [ "**WRITE YOUR ANSWER HERE**" ] }, { "cell_type": "markdown", "metadata": { "id": "n7To_tZp_dQa" }, "source": [ "## Setup: Preparing our dataset for fine-tuning BERT\n" ] }, { "cell_type": "markdown", "metadata": { "id": "1MpJwMM0GiW-" }, "source": [ "### Preparing our datasets for fine-tuning BERT" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "sznK5UGEGmUT" }, "outputs": [], "source": [ "from transformers import AutoTokenizer\n", "\n", "tokenizer = AutoTokenizer.from_pretrained(pretrained_bert)" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "52aDAa_BGogX" }, "outputs": [], "source": [ "from torch.utils.data import Dataset, DataLoader\n", "\n", "class MovieReviewDataset(torch.utils.data.Dataset):\n", "    def __init__(self, encodings, labels):\n", "        self.encodings = encodings\n", "        self.labels = labels\n", "        self.tokenizer = tokenizer\n", "\n", "    def __getitem__(self, idx):\n", "        item = {key: torch.tensor(val[idx]) for key, val in self.encodings.items()}\n", "        item['labels'] = torch.tensor(self.labels[idx])\n", "        return item\n", "\n", "    def __len__(self):\n", "        return len(self.labels)\n", "\n", "train_encodings = tokenizer(train_texts, truncation=True)\n", "dev_encodings = tokenizer(dev_texts, truncation=True)\n", "test_encodings = tokenizer(test_texts, truncation=True)\n", "\n", "train_dataset = MovieReviewDataset(train_encodings, train_labels)\n", "dev_dataset = MovieReviewDataset(dev_encodings, dev_labels)\n", "test_dataset = MovieReviewDataset(test_encodings, test_labels)" ] }, { "cell_type": "markdown", "metadata": { "id": "Y5XjfebZGqB4" }, "source": [ "### Defining a method to support computing and reporting metrics" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "jI_S7ZXkGqlj" }, "outputs": [], "source": [ "# Source: https://huggingface.co/transformers/training.html\n", "import evaluate\n", "\n", "metric = evaluate.load(\"accuracy\")\n", "\n", "def compute_metrics(eval_pred):\n", "    logits, labels = eval_pred\n", "    predictions = numpy.argmax(logits, axis=-1)\n", "    return metric.compute(predictions=predictions, references=labels)" ] }, { "cell_type": "markdown", "metadata": { "id": "9SjLN_e5PBpu" }, "source": [ "### Defining a method for instantiating a BERT model for the fine-tuning procedure" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "ulT4U2TNPTMd" }, "outputs": [], "source": [ "from transformers import AutoModelForSequenceClassification\n", "\n", "def model_init():\n", "    return AutoModelForSequenceClassification.from_pretrained(\n", "        pretrained_bert, num_labels=2)" ] }, { "cell_type": "markdown", "metadata": { "id": "U-mWYURhGsox" }, "source": [ "## Fine-tuning BERT\n", "We can fine-tune our BERT model using a `Trainer` object ([documentation](https://huggingface.co/transformers/main_classes/trainer.html)). 
To build a `Trainer` object, we need to provide a `TrainingArguments` object ([documentation](https://huggingface.co/transformers/main_classes/trainer.html#trainingarguments)), which is where we can specify hyperparameter settings and other training details.\n", "\n", "Running the following code cell should take around 3 minutes." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "BqfjkIe4Gukv" }, "outputs": [], "source": [ "from transformers import Trainer, TrainingArguments\n", "\n", "training_args = TrainingArguments(\n", "    output_dir='./results',          # output directory\n", "    num_train_epochs=2,              # total number of training epochs\n", "    per_device_train_batch_size=8,   # batch size per device during training\n", "    per_device_eval_batch_size=64,   # batch size for evaluation\n", "    evaluation_strategy=\"epoch\",     # evaluation occurs after each epoch\n", "    logging_dir='./logs',            # directory for storing logs\n", "    logging_strategy=\"epoch\",        # logging occurs after each epoch\n", "    log_level=\"error\",               # set logging level\n", "    optim=\"adamw_torch\",             # use pytorch's adamw implementation\n", "    # YOUR CODE HERE!\n", "\n", ")\n", "\n", "trainer = Trainer(\n", "    model_init=model_init,           # method instantiates model to be trained\n", "    args=training_args,              # training arguments, defined above\n", "    train_dataset=train_dataset,     # training dataset\n", "    eval_dataset=dev_dataset,        # evaluation dataset\n", "    compute_metrics=compute_metrics, # function to be used in evaluation\n", "    tokenizer=tokenizer,             # enable dynamic padding\n", ")\n", "\n", "trainer.train()\n", "val_accuracy = trainer.evaluate()['eval_accuracy']\n", "\n", "print()\n", "print()\n", "print(\"FINAL: Validation Accuracy {:.3f}, 95% CI [{:.3f}, {:.3f}]\".format(val_accuracy, *get_confidence_intervals(val_accuracy, NUM_DEV, 0.95)))" ] }, { "cell_type": "markdown", "metadata": { "id": "WA86GDy0GyAJ" }, "source": [ "## Question 2.2 (5 points)\n", "Selecting a good learning rate is very important for fine-tuning. Among the many hyperparameters, this is the one you should at least try varying. Let's adjust the learning rate to see which setting performs best. **Fine-tune BERT with the following learning rates: 2e-5, 3e-5, 4e-5, 5e-5.** To adjust the learning rate, set the `learning_rate` parameter of your `TrainingArguments` object. By default, it's `5e-5`.\n", "\n", "Report the resulting validation accuracy (i.e., after the last epoch of training) for each learning rate in the table below." ] }, { "cell_type": "markdown", "metadata": { "id": "p8PZkM5RG7l6" }, "source": [ "| Learning Rate | Validation Accuracy (%) | 95% Confidence Interval (%) |\n", "| :-: | :-: | :-: |\n", "| 2e-5 | ??.? | \\[ ??.?, ??.? \\] |\n", "| 3e-5 | ??.? | \\[ ??.?, ??.? \\] |\n", "| 4e-5 | ??.? | \\[ ??.?, ??.? \\] |\n", "| 5e-5 | ??.? | \\[ ??.?, ??.? \\] |" ] }, { "cell_type": "markdown", "metadata": { "id": "RU7StShjHk6W" }, "source": [ "Which of these learning rates performs the best with respect to validation accuracy? Taking into consideration the 95% confidence intervals of the validation accuracy results, how meaningful / significant are these differences in performance? Explain your reasoning." ] }, { "cell_type": "markdown", "metadata": { "id": "SAsXGVhkHpbN" }, "source": [ "**WRITE YOUR ANSWER HERE**" ] }, { "cell_type": "markdown", "metadata": { "id": "B8TZfpiAHrJT" }, "source": [ "## Question 2.3 (5 points)\n", "Random initializations can also affect fine-tuning performance. By default, the random seed of the Trainer is set to `42`. 
Using the *best* performing learning rate from Question 2.2, **fine-tune BERT with three additional random seeds of your choice.** To adjust the random seed, set the `seed` parameter of your `TrainingArguments` object.\n", "\n", "Report the resulting validation accuracy (i.e., after the last epoch of training) and 95% confidence interval for each random seed in the table below." ] }, { "cell_type": "markdown", "metadata": { "id": "pyJm_hTvHtTo" }, "source": [ "| Random Seed | Validation Accuracy (%) | 95% Confidence Interval (%) |\n", "| :-: | :-: | :-: |\n", "| 42 | ??.? | \\[ ??.?, ??.? \\] |\n", "| ? | ??.? | \\[ ??.?, ??.? \\] |\n", "| ? | ??.? | \\[ ??.?, ??.? \\] |\n", "| ? | ??.? | \\[ ??.?, ??.? \\] |" ] }, { "cell_type": "markdown", "metadata": { "id": "FE4e4ejPH7_S" }, "source": [ "Which of these random seeds performs the best with respect to validation accuracy? How do these differences compare with the variation seen in Question 2.2? Explain your reasoning." ] }, { "cell_type": "markdown", "metadata": { "id": "SieJeGpOH9sE" }, "source": [ "**WRITE YOUR ANSWER HERE**" ] }, { "cell_type": "markdown", "metadata": { "id": "8Aa-q5NbSRE6" }, "source": [ "## Question 2.4 (5 points)\n", "In Questions 2.2 and 2.3 we changed two different hyperparameters that can impact the performance of our models. However, we changed each one individually while keeping the other fixed. Let's see how the random seeds from Question 2.3 affect your *worst* performing learning rate from Question 2.2.\n", "\n", "Report the resulting validation accuracy (i.e., after the last epoch of training) and 95% confidence interval for each random seed in the table below." ] }, { "cell_type": "markdown", "metadata": { "id": "46Cib6wkU5gH" }, "source": [ "| Random Seed | Validation Accuracy (%) | 95% Confidence Interval (%) |\n", "| :-: | :-: | :-: |\n", "| 42 | ??.? | \\[ ??.?, ??.? \\] |\n", "| ? | ??.? | \\[ ??.?, ??.? \\] |\n", "| ? | ??.? | \\[ ??.?, ??.? \\] |\n", "| ? | ??.? | \\[ ??.?, ??.? \\] |" ] }, { "cell_type": "markdown", "metadata": { "id": "AXP_iiKDVB6P" }, "source": [ "Given these results and those from Question 2.3, can the random seed of the Trainer affect which learning rate seems best? Explain your reasoning. " ] }, { "cell_type": "markdown", "metadata": { "id": "cahgahWMVAE_" }, "source": [ "**WRITE YOUR ANSWER HERE**" ] }, { "cell_type": "markdown", "metadata": { "id": "DZgUTa1TIADL" }, "source": [ "## Question 2.5 (5 points)\n", "Now that we've performed our hyperparameter search, let's see how well your fine-tuned model performs on our test set. Add your best hyperparameter settings (determined by Questions 2.2-2.4) to the code cell below to fine-tune BERT and then compute the test accuracy for your fine-tuned model."
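, "\n", "\n", "For reference, here is a minimal sketch of how these settings can be passed to `TrainingArguments` (the values below are *hypothetical* placeholders; substitute the best learning rate and seed you actually found, and fill them into the `best_training_args` cell below):\n", "\n", "```python\n", "from transformers import TrainingArguments\n", "\n", "# Hypothetical settings -- replace with the best values from Questions 2.2-2.4.\n", "example_args = TrainingArguments(\n", "    output_dir='./results',\n", "    num_train_epochs=2,\n", "    learning_rate=3e-5,   # set via the learning_rate parameter (Question 2.2)\n", "    seed=42,              # set via the seed parameter (Question 2.3)\n", ")\n", "```"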
] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "hHKjTJBsG5As" }, "outputs": [], "source": [ "best_training_args = TrainingArguments(\n", " output_dir='./results', # output directory\n", " num_train_epochs=2, # total number of training epochs\n", " per_device_train_batch_size=8, # batch size per device during training\n", " per_device_eval_batch_size=64, # batch size for evaluation\n", " evaluation_strategy=\"epoch\", # evaluation occurs after each epoch\n", " logging_dir='./logs', # directory for storing logs\n", " logging_strategy=\"epoch\", # logging occurs after each epoch\n", " log_level=\"error\", # set logging level\n", " optim=\"adamw_torch\", # use pytorch's adamw implementation\n", " # YOUR CODE HERE!\n", "\n", ")\n", "\n", "best_trainer = Trainer(\n", " model_init=model_init, # method instantiates model to be trained\n", " args=best_training_args, # training arguments, defined above\n", " train_dataset=train_dataset, # training dataset\n", " eval_dataset=dev_dataset, # evaluation dataset\n", " compute_metrics=compute_metrics, # function to be used in evaluation\n", " tokenizer=tokenizer, # enable dynamic padding\n", ")\n", "\n", "best_trainer.train()\n", "\n", "# Print test accuracy\n", "print()\n", "print()\n", "test_accuracy = best_trainer.evaluate(test_dataset)['eval_accuracy']\n", "print(\"Test Accuracy {:.3f}, 95% CI [{:.3f}, {:.3f}]\".format(test_accuracy, *get_confidence_intervals(test_accuracy, NUM_TEST, 0.95)))" ] }, { "cell_type": "markdown", "metadata": { "id": "f3ZQ53MGIJWT" }, "source": [ "Although both your fine-tuned BERT classifier and the one you built in Part 1 rely on the [CLS] token, they have radically different performance. Explain why fine-tuning BERT greatly outperforms the results from Question 1.3." ] }, { "cell_type": "markdown", "metadata": { "id": "DRH9V3HXILEN" }, "source": [ "**WRITE YOUR ANSWER HERE**" ] }, { "cell_type": "code", "source": [ "## asdf" ], "metadata": { "id": "GmnlGdXifklR" }, "execution_count": null, "outputs": [] }, { "cell_type": "markdown", "source": [ "# Part 3, Extra credit: Generative LLMs\n", "\n", "This section is extra credit, to explore large language models.\n", "\n", "## Question 3.1 (up to 10 points EC)\n", "\n", "Choose a generative language model that has an API (e.g. ChatGPT), set the temperature to 0, and come up with two questions that it answers incorrectly (the questions cannot be related to facts after the pre-training date for the model; e.g. 2021 for GPT4). Then, use one of the prompt engineering strategies linked from the schedule page to get the language model to output the correct answer. In your writeup, for each question, list the original question and answer outputted by the model through the API, describe the prompt engineering strategy, and list the new inputs to and outputs from the model which describe a correct answer.\n", "\n", "**WRITE YOUR CODE/ANSWER HERE**\n", "\n", "\n", "## Question 3.2 (up to 10 points EC)\n", "\n", "Choose a generative language model you can run yourself (not a remote API), where you can access the probability distribution for the next word $p(w_t | w_{t-d}..w_{t-1})$. We suggest a model from the [Pythia](https://github.com/EleutherAI/pythia) project; on HuggingFace, you could try, for example, [EleutherAI/pythia-70m-deduped](https://huggingface.co/EleutherAI/pythia-70m-deduped), Llama, etc.\n", "\n", "Implement greedy decoding, top-k sampling, nucleus (top-p) sampling, and possibly beam search (trickier). 
Come up with 3 different sequences of text. For each sequence, examine the text generated by greedy decoding, by beam search (varying the number of beams from 1 to a large number), by top-k sampling (varying k), and by nucleus (top-p) sampling (varying p). Discuss the quality of the text generated by greedy decoding, beam search, top-k sampling, and nucleus sampling: which leads to better generated text? For each of beam search, top-k sampling, and nucleus sampling, discuss how the quality of the generated text changes as you vary its hyperparameter (number of beams, k, or p). What patterns can you find? What are the reasons behind these observations (e.g., why does one method perform better or worse)?\n", "Before running these experiments, what would you have expected the results to be, and do your observations match those expectations?\n", "\n", "\n", "**WRITE YOUR CODE/ANSWER HERE**\n", "\n" ], "metadata": { "id": "oquAljv2gCMJ" } } ], "metadata": { "colab": { "provenance": [], "toc_visible": true, "gpuType": "T4" }, "kernelspec": { "display_name": "Python 3", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.8.5" }, "accelerator": "GPU" }, "nbformat": 4, "nbformat_minor": 0 }