Homer


Homer is a Python package that can help make your text clearer, simpler, and more useful for the reader. It reports on the text as a whole as well as on individual paragraphs, giving insights into readability, paragraph length, sentence length, average sentences per paragraph, average words per sentence, and so on. It also tries to identify certain kinds of vague words, and it tracks how often the word "and" appears in the text. (More information on all of these follows in the Acknowledgements section.)

This software package grew out of a personal need. Since I am not a native English speaker but am interested in writing, I designed Homer and have been using it to improve my writing. I hope others will find it useful.

Please note that this is not a strict guide to control your writing. At least, I don't use it that way. I use it as a guide to make my writing as simple as possible. I strive to write concise paragraphs and sentences and to use fewer unclear words, and Homer has been helping me do that.

I have only used it to analyze my blog posts and essays, not large corpora of text. As this software is new, you may well spot bugs, in which case please feel free to open issues or pull requests.

You can use Homer as a stand-alone package or from the command line. If you run it on the command line, you get general stats on your article or essay as well as paragraph-level stats.

Running Homer from the command line gives the following insights about the article/essay:

  • Reading time in minutes (though this will vary somewhat from reader to reader).
  • Readability scores (Flesch reading ease and Dale Chall readability scores; see the sketch after this list).
  • Total paragraphs, sentences, and words.
  • Average sentences per paragraph.
  • Average words per sentence.
  • "and" frequency.
  • Number and list of compulsive hedgers, intensifiers, and vague words.
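For reference, the Flesch reading ease score mentioned above combines average sentence length with average syllables per word. Here is a minimal sketch of the standard formula (an illustration, not Homer's own code):

def flesch_reading_ease(words, sentences, syllables):
    # Standard Flesch reading ease: higher scores mean easier text
    return 206.835 - 1.015 * (words / sentences) - 84.6 * (syllables / words)

# For example: 180 words, 12 sentences, 240 syllables -> about 78.8 ("fairly easy")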

(Screenshot: overall stats)

Paragraph stats give the following information for each paragraph:

  • Number of sentences and words.
  • Average words per sentence.
  • The longest sentence in the paragraph.
  • Readability scores (Flesch reading ease and Dale Chall readability scores).
  • If a paragraph contains more than five sentences, Homer gives a warning highlighted in red.
  • Similarly, if a sentence contains more than 25 words, a warning highlighted in red is given.

(Screenshot: paragraph stats)

I built this on Python 3.4.5, so first we need to install Python.

On Mac, I used Homebrew to install Python, e.g. with a command like this:
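> brew install python3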

To install on Windows, you can download the installer from here. Once downloaded, run the installer to complete Python's installation.

For Ubuntu you might find this resource useful.
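On many Ubuntu releases Python 3 comes preinstalled; if it doesn't, a command along these lines should work:

> sudo apt install python3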

Now it's time to create a virtual environment (assuming you cloned the code under ~/code/homer). For example, with Python's built-in venv module:
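~/code/homer $ python3 -m venv venv
~/code/homer $ source venv/bin/activate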

The first line in the snippet above creates a virtual environment named venv under ~/code/homer. The second command activates the virtual environment.

In case you need more help with creating a virtual environment, this resource can prove useful.

Install using Pip:

~/code/homer $ pip install homer-text

And that's it. It should install everything, i.e. the required libraries, the NLTK packages, and homer_text itself.

Prior to using it for the first time, make sure you have all the NLTK data files:

import nltk
nltk.download('punkt')      # sentence tokenizer models
nltk.download('cmudict')    # CMU Pronouncing Dictionary
nltk.download('stopwords')  # stopword lists

A command-line utility is provided under the homer directory. Here is an example showing how to use it:

> python homer_cmd.py --name article_name --author lalala --file_path=/correct/path/to/file.txt

Both --name and --author are optional, whereas --file_path is mandatory.

You can also use Homer in your code. Here is an example:

# file: analyse.py
import sys

from homer.analyzer import Article
from homer.cmdline_printer import ArticlePrinter

# Read the text to analyse from the file named on the command line
with open(sys.argv[1]) as f:
    article = Article('Article name', 'Author', f.read())

ap = ArticlePrinter(article)
ap.print_article_stats()    # stats for the whole text
ap.print_paragraph_stats()  # stats for each paragraph

Use it like this:

> python analyse.py text_to_analyse.md

Tests can be run from the tests directory.
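For example, with the standard library's test runner (assuming unittest-style tests; substitute your preferred runner):

> cd tests
> python -m unittest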

Author:

Contributors:

Acknowledgements

  • Steven Pinker's The Sense of Style: The Thinking Person's Guide to Writing in the 21st Century. This book gave me quite a few insights. It also prompted me to include tracking of vague words, compulsive hedgers, and intensifiers.

    • Compulsive hedgers: words such as _apparently, almost, fairly, nearly, partially, predominantly, presumably, rather, relatively, seemingly,_ etc.
    • Intensifiers: words such as _very, highly, extremely_.
  • Bankspeak: The Language of World Bank Reports, 1946–2012 (https://litlab.stanford.edu/LiteraryLabPamphlet9.pdf). This pamphlet also gave me a few ideas; the idea of keeping track of "and" and of vague words in a text came from here.

    • "and" frequency: Basically it is the number of times the word "and" is used in the text (given as a percentage of total text). I try to keep it under 3 %.
    • Vague words: a list of words I compiled after reading the above report. Using these words unnecessarily, or without giving them proper context, can make a text more abstract. These are words such as _derivative, fair value, portfolio, evaluation, strategy, competitiveness, reform, growth, capacity, progress, stability, protection, access, sustainable,_ etc.
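A minimal sketch of one way to compute such an "and" frequency, treating it as a percentage of all words (an illustration, not Homer's exact implementation):

def and_frequency(text):
    # Percentage of words in `text` that are exactly "and"
    words = text.lower().split()
    return 100 * words.count('and') / len(words) if words else 0.0

# and_frequency("cats and dogs and birds") -> 40.0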

Pull requests are welcome. For major changes, please open an issue first to discuss what you would like to change.

Please make sure to update tests as appropriate. Also, add your name under the Authors section of the readme file.

License: MIT
