{ "cells": [ { "cell_type": "markdown", "metadata": { "slideshow": { "slide_type": "-" } }, "source": [ "# Homework 1: Word statistics" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "This is due on **Friday, Sept 16 (11:59pm)**, submitted electronically. 100 points total.\n", "\n", "## How to do this problem set\n", "\n", "Most of these questions require writing Python code and computing results, and the rest of them have textual answers. Write all the answers in this document. Once you are finished, you will upload this `.ipynb` file to Moodle.\n", "\n", "A few tips as you develop code:\n", "* *Enter* to edit a cell and *Ctrl-Enter* to re-run a cell. (see Help -> Keyboard Shortcuts)\n", "* When creating your final version of the problem set to hand in, please do a fresh restart with \"Kernel -> Reset\" and execute every cell in order. Then you'll be sure your code doesn't rely on weird global variables you forgot about. Make sure to press \"Save\"!" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "**Your Name:** *write name here*\n", "\n", "**List collaborators:** *list here*\n", "\n", "(see our [grading and policies page](http://people.cs.umass.edu/~brenocon/inlp2016/grading.html) for details on our collaboration policy).\n" ] }, { "cell_type": "markdown", "metadata": { "slideshow": { "slide_type": "-" } }, "source": [ "## Part (A): Download dataset and load text\n", "\n", "You'll be working with a sample of the [IMDB Large Movie Review Dataset](http://ai.stanford.edu/~amaas/data/sentiment/).\n", "\n", "[Here's the sample](http://people.cs.umass.edu/~brenocon/inlp2016/hw1/imdb_pos_sample.zip) for this assignment. It consists of 1136 positive reviews of movies. Download it and unzip it somewhere and look at a few documents to be sure you know what you're getting.\n", "\n", "***A1: Load documents (10 points):***\n", "\n", "Load the documents into a dictionary as described below. `os.listdir()` and `os.path.join()` may be helpful." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false, "scrolled": true }, "outputs": [], "source": [ "from __future__ import division\n", "import os,sys,re,math\n", "\n", "# This dictionary will hold all the documents.\n", "# Keys and values are intended to both be strings.\n", "# Keys: the filename\n", "# Values: the text content of the file\n", "\n", "fname2content = {} # {filename: text of file}\n", "\n", "#-------------------Don't modify the code above-------------------------\n", "#-------------------Provide your answer below--------------------------\n", "\n", "\n", "\n", "#-------------------Provide your answer above---------------------------\n", "#-------------------Don't modify the code below------------------------\n", "# or only minimally modify it in case your keys have a slightly different format\n", "print \"Number of documents loaded: \", len(fname2content)\n", "print fname2content[\"17_9.txt\"][:500]\n" ] }, { "cell_type": "markdown", "metadata": { "slideshow": { "slide_type": "-" } }, "source": [ "You should see the output of above question is as following:\n", "\n", "*Number of documents loaded: 1136
\n", "This is a complex film ...*" ] }, { "cell_type": "markdown", "metadata": { "slideshow": { "slide_type": "-" } }, "source": [ "## Part (B): Tokenize and count words\n", "\n", "The goal of this part is to perform simple preprocessing on the raw text and count the words.\n", "\n", "***B1: Naive tokenization (10 points):***\n", "\n", "For now, assume tokens are based on whitespace. Write code to calculate the total number of tokens -- this will have to iterate through documents and tokenize each of them." ] }, { "cell_type": "code", "execution_count": 10, "metadata": { "collapsed": false }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Total Number of tokens: 0\n" ] } ], "source": [ "total_token_num=0\n", "\n", "#-------------------Don't modify the code above-------------------------\n", "#-------------------Provide your answer bellow--------------------------\n", "\n", " \n", "#-------------------Provide your answer above---------------------------\n", "#-------------------Don't modify the code bellow------------------------\n", "\n", "print \"Total Number of tokens: \", total_token_num" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "***B2: Better tokenization/normalization (10 points)***\n", "\n", "Please develop a better tokenizer and text normalizer yourself -- perhaps using regular expressions, as we discussed in class, and with case normalization. A good way to experiment with tokenizers is to run them on very small examples, so try to improve the examples below. (Once it works better on a small sample, you will run it on the larger corpus next.) Show that your new tokenizer gives better results than a naive tokenizer on these two example texts." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "## --- keep this code ---\n", "examples = [\"Hello, we are good.\", \"OK... I'll go here, ok?\"]\n", "\n", "print \"Naive tokenizations\"\n", "for example in examples:\n", " print example.split()\n", "\n", "## --- modify code below ---\n", "\n", "def better_tokenizer(text):\n", " return [] ## todo changeme\n", "\n", "print \"Better tokenizations\"\n", "for example in examples:\n", " print better_tokenizer(example)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "***B3: Word count (10 points):***\n", "\n", "Count words from the corpus into the variable `word_counts`, using your `better_tokenizer`. We initialized the `word_counts` variable as a Python dict; feel free to use `defaultdict(lambda:0)` instead, which is slightly easier to use. (In the future you may want to check out `Counter`, but please practice dict or defaultdict here.)\n", "\n", "Print out \n", "1. The vocabulary size.\n", "2. The top 10 most common terms. \n", " \n", "Important functions to make this easy include dict's `.items()`, list's `.sort()` (and/or standalone `sorted()`) and the `key=` parameter on sort." ] }, { "cell_type": "code", "execution_count": 9, "metadata": { "collapsed": false }, "outputs": [], "source": [ "#-------------------Provide your answer below--------------------------\n", "word_counts = {} ## will contain {word: count}" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Part (C): Data visualization\n", "\n", "In this section, you will verify two key statistical properties of text: [Zipf's Law](https://en.wikipedia.org/wiki/Zipf%27s_law) and [Heaps' Law](http://en.wikipedia.org/wiki/Heaps'_law). 
, "\n", "***Question C1: Visualizing Zipf's Law (20 points):***\n", "\n", "Zipf's Law describes the relationship between a word's frequency rank and its frequency. For a word $w$, its frequency is inversely proportional to its rank:\n", "\n", "$$count_w = K \frac{1}{rank_w}$$\n", "or equivalently, taking logs,\n", "$$\log(count_w) = \log(K) - \log(rank_w)$$\n", "\n", "for some constant $K$ specific to the corpus and to how words are being defined.\n", "\n", "Therefore, if Zipf's Law holds, when the words are sorted by decreasing frequency, frequency falls off approximately linearly with rank on a log-log scale.\n", "\n", "Please make such a log-log plot by plotting rank versus frequency. Use a scatter plot where the x-axis is *log(rank)* and the y-axis is *log(frequency)*. You should get this information from `word_counts`; for example, you can take the individual word counts and sort them. The dict methods `.items()` and/or `.values()` may be useful. (Note that it doesn't really matter whether ranks start at 1 or 0 in terms of how the plot comes out.)\n", "\n", "**Please remember to label the meaning of the x-axis and y-axis.**\n" ] },
{ "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "#-------------------Modify code below------------------\n", "# sample plotting code; feel free to delete\n", "import matplotlib.pyplot as plt\n", "%matplotlib inline\n", "fig = plt.figure()\n", "ax = plt.gca()\n", "ax.scatter([1,2,3], [10,1000,3000], linewidth=2)\n", "plt.xlabel(\"my x axis\")\n", "plt.ylabel(\"my y axis\")" ] },
{ "cell_type": "markdown", "metadata": {}, "source": [ "***Question C2: Interpreting a Zipf plot (5 points):***\n", "\n", "You should see some discontinuities on the left and right sides of this figure. Why are we seeing them on the left? Why are we seeing them on the right? On the right, what are those \"ledges\"?\n" ] },
{ "cell_type": "markdown", "metadata": {}, "source": [ "**ANSWER:**\n", "\n", "*answerme*" ] },
{ "cell_type": "markdown", "metadata": {}, "source": [ "\n", "***Question C3: Visualizing Heaps' Law (20 points):***\n", "\n", "Heaps' Law asserts (in its most basic version) that the vocabulary size of a corpus is approximately proportional to the square root of the number of tokens in the corpus:\n", "\n", "$$ |V| = K \sqrt{N_{tok}} $$\n", "\n", "where $K$ is a constant specific to the corpus.\n", "\n", "We will investigate this phenomenon empirically by iterating over our corpus. Iterate over the documents; after each one, calculate the total number of tokens seen so far and the total number of unique word types seen so far. We would like to know how these two numbers relate to each other as we see more and more of the corpus.\n", "\n", "Create a plot with a curve (line plot or scatter plot, whichever you think is better) where the x-axis is the number of tokens seen so far and the y-axis is the vocabulary size so far.\n", "\n", "**Make sure to label your axes.**\n"
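, "\n", "As an illustration of the bookkeeping (on two made-up documents with a plain whitespace split, not your real corpus or your `better_tokenizer`), the running totals might be collected like this:\n", "\n", "```python\n", "toy_docs = ['the movie was good', 'the plot was not good']   # made-up documents, illustration only\n", "seen = set()                     # word types observed so far\n", "tokens_so_far = 0\n", "points = []                      # list of (tokens so far, vocabulary size so far)\n", "for doc in toy_docs:\n", "    toks = doc.split()           # stand-in for your better_tokenizer\n", "    tokens_so_far += len(toks)\n", "    seen.update(toks)\n", "    points.append((tokens_so_far, len(seen)))\n", "print points                     # [(4, 4), (9, 6)]\n", "```\n"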
] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "#-------------------Provide your answer below--------------------------\n", "# one way to implement: maintain and update a set representing the vocabulary so far.\n", "seen_words = set()" ] },
{ "cell_type": "markdown", "metadata": {}, "source": [ "***C4 Heaps' Law: glass half empty (5 points):***\n", "\n", "Why does the vocabulary grow more slowly than the number of tokens, and why does its growth get slower and slower?" ] },
{ "cell_type": "markdown", "metadata": {}, "source": [ "**ANSWER:**\n", "\n", "*answerme*" ] },
{ "cell_type": "markdown", "metadata": {}, "source": [ "***C5 Heaps' Law: glass half full (5 points):***\n", "\n", "Imagine you obtained millions and millions of documents and repeated this experiment. For how long would the vocabulary keep increasing?" ] },
{ "cell_type": "markdown", "metadata": {}, "source": [ "**ANSWER:**\n", "\n", "*answerme*" ] },
{ "cell_type": "markdown", "metadata": {}, "source": [ "***C6: Constants (5 points):***\n", "\n", "Heaps' Law has a constant in it. Describe something, in either the data or the analysis method, that would affect this constant. Explain why." ] },
{ "cell_type": "markdown", "metadata": {}, "source": [ "**ANSWER:**\n", "\n", "*answerme*" ] } ], "metadata": { "anaconda-cloud": {}, "celltoolbar": "Slideshow", "kernelspec": { "display_name": "Python 2", "language": "python", "name": "python2" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 2 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython2", "version": "2.7.11" } }, "nbformat": 4, "nbformat_minor": 0 }