It is also possible to write a job in any programming language, such as Python or C, that operates on tab-separated key-value pairs. The same example done above with Hive and Pig can also be written in Python and submitted as a Hadoop job using Hadoop Streaming. Submitting a job with Hadoop Streaming requires writing a mapper and a reducer. The mapper reads input line by line and generates key-value pairs for the reducer to “reduce” into some sort of sensible data. In our case, the mapper will read in lines and output the year as the key and a ‘1’ as the value if the ngram in the line it reads has only appeared in a single volume. The Python code to do this is:
(Save this file as map.py)
#!/usr/bin/env python2.7
import fileinput

# Input columns: ngram, year, count, volumes
for line in fileinput.input():
    arr = line.split("\t")
    try:
        if int(arr[3]) == 1:
            print("\t".join([arr[1], '1']))
    except IndexError:
        pass
    except ValueError:
        pass
Now that the mapper has done this, the reducer merely needs to sum the values for each key:
(Save this file as red.py)
#!/usr/bin/env python2.7
import fileinput

# Each line from the mapper is "year<TAB>1"
data = dict()
for line in fileinput.input():
    arr = line.split("\t")
    if arr[0] not in data:
        data[arr[0]] = int(arr[1])
    else:
        data[arr[0]] = data[arr[0]] + int(arr[1])

for key in data:
    print("\t".join([key, str(data[key])]))
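Before submitting to the cluster, you can sanity-check both scripts locally by piping a small sample of the data through them in the same map, sort, reduce order that streaming uses. The file name ngrams_sample.tsv below is just a placeholder for whatever local sample you have:

# Simulate the MapReduce flow on a small local sample
head -n 10000 ngrams_sample.tsv | python2.7 map.py | sort | python2.7 red.py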
Submitting this streaming job can be done by running the command below:
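A minimal sketch of the submission is shown here. The location of the streaming jar varies by Hadoop distribution (the path below is where a stock Apache Hadoop install puts it), and map.py and red.py must be executable (chmod +x):

# Ship the mapper and reducer with the job and run them over the ngrams data
hadoop jar $HADOOP_HOME/share/hadoop/tools/lib/hadoop-streaming-*.jar \
    -Dmapreduce.job.queuename=<your_queue> \
    -input /var/ngrams \
    -output ngrams_out \
    -mapper map.py \
    -reducer red.py \
    -file map.py \
    -file red.py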
Pig is similar to Hive and can do the same thing. The Pig code to do this is a little bit longer due to its design. However, writing long Pig code is generally easier than writing multiple SQL queries that chain together, since Pig’s language, Pig Latin, allows for variables and other high-level constructs.
# Open the interactive pig console
pig -Dtez.job.queuename=<your_queue>

# Load the data
ngrams = LOAD '/var/ngrams' USING PigStorage('\t') AS
    (ngram:chararray, year:int, count:long, volumes:long);

# Look at the schema of the ngrams variable
describe ngrams;

# Count the total number of rows (should be 1430731493)
ngrp = GROUP ngrams ALL;
count = FOREACH ngrp GENERATE COUNT(ngrams);
DUMP count;

# Select the number of words, by year, that have only appeared in a single volume
one_volume = FILTER ngrams BY volumes == 1;
by_year = GROUP one_volume BY year;
yearly_count = FOREACH by_year GENERATE group, COUNT(one_volume);
DUMP yearly_count;
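If you want to keep the result rather than just print it to the console, Pig can write it back to HDFS with STORE. A sketch, assuming an output path you have write access to:

# Write the per-year counts back to HDFS instead of dumping to the console
STORE yearly_count INTO '/user/<your_username>/ngrams_by_year' USING PigStorage('\t');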
The last few lines of output should look like this:
More information on Pig can be found on the Apache website.
Another way to run Hadoop jobs is through mrjob. mrjob is useful for testing code on smaller data on another system (such as your laptop) and then running the same code on something larger, like a Hadoop cluster. To run an mrjob job on your laptop, simply remove “-r hadoop” from the command in the example below.
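If mrjob is not already available in your Python environment, it can typically be installed from PyPI:

pip install --user mrjob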
A classic example is a word count, taken from the official mrjob documentation.
Save this file as mrjob_test.py.
"""The classic MapReduce job: count the frequency of words. """ from mrjob.job import MRJob import re WORD_RE = re.compile(r"[\w']+") class MRWordFreqCount(MRJob): def mapper(self, _, line): for word in WORD_RE.findall(line): yield (word.lower(), 1) def combiner(self, word, counts): yield (word, sum(counts)) def reducer(self, word, counts): yield (word, sum(counts)) if __name__ == '__main__': MRWordFreqCount.run()
Then, run the following command:
python mrjob_test.py -r hadoop /etc/motd
You should receive output with the word counts for the file /etc/motd. You can also try this with any other text file you have.
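As noted above, dropping “-r hadoop” runs the same job locally on your machine instead of on the cluster, which is handy for testing:

# Run the same word count locally (no Hadoop cluster involved)
python mrjob_test.py /etc/motd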