chbrown / liwc-python

Linguistic Inquiry and Word Count (LIWC) analyzer

Support for LIWC2015

Marco299 opened this issue

Does this package work with the LIWC 2015 dictionary?

Hi! I don't have a copy of LIWC 2015, so I don't know. If the format is the same, just with different categories and match expressions, then it should work fine, since none of that is hard-coded in this package. But if it uses a different syntax, then probably not.

Thank you.

UPDATE

I confirmed that the library still works with the LIWC2015_English_Flat.dic file. There's another file called LIWC2015_English.dic, which must use a different syntax.


I just tested with my 2015 dictionary, and I see the following error. I'd like to know if the error can be solved with small changes:

IndexError                                Traceback (most recent call last)
<ipython-input-2-fe14ed5dbb04> in <module>()
----> 1 parse, category_names = liwc.load_token_parser('./LIWC2015_English.dic')

~/anaconda/envs/suggestbot/lib/python3.6/site-packages/liwc/__init__.py in load_token_parser(filepath)
     74       the lexicon
     75     '''
---> 76     lexicon, category_names = read_dic(filepath)
     77     trie = _build_trie(lexicon)
     78     def parse_token(token):

~/anaconda/envs/suggestbot/lib/python3.6/site-packages/liwc/__init__.py in read_dic(filepath)
     22             elif mode == 1:
     23                 # definining categories
---> 24                 category_names.append(parts[1])
     25                 category_mapping[parts[0]] = parts[1]
     26             elif mode == 2:

IndexError: list index out of range

I had to tweak the code slightly for the LIWC2015_English_Flat.dic file. Because some string patterns now consist of multiple words (e.g., "kind of"), the split call on line 19 of __init__.py needs to use "\t" as its argument so that it only splits on tabs. Otherwise you end up with duplicate keys in the dictionary (e.g., "of") and a run-time error as a result.
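For anyone trying to reproduce that tweak, here's roughly what the parsing loop looks like with a tab-only split. This is a minimal sketch that follows the structure visible in the traceback above, not the package's exact code:

```python
def read_dic(filepath):
    """Sketch of a .dic parser with a tab-only split (illustrative, not the package's exact code)."""
    category_mapping = {}   # category id -> category name
    category_names = []
    lexicon = {}            # pattern -> list of category names
    mode = 0                # the two '%' lines separate header, categories, and entries
    with open(filepath) as lines:
        for line in lines:
            line = line.strip()
            if not line:
                continue
            if line == '%':
                mode += 1
                continue
            # split on tabs only, so multi-word patterns like "kind of"
            # stay together as a single key instead of being split apart
            parts = line.split('\t')
            if mode == 1:
                # category definitions: "<id>\t<name>"
                category_names.append(parts[1])
                category_mapping[parts[0]] = parts[1]
            elif mode == 2:
                # lexicon entries: "<pattern>\t<id>\t<id>..."
                lexicon[parts[0]] = [category_mapping[cid] for cid in parts[1:]]
    return lexicon, category_names
```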

@keenemind Hi, I have changed the line 19 code in __init__.py in my Python module path, but it does not work. Can you give me more details?

thx

@kyoungrok0517 hi, do you have any solution?
thx

I have the same problem. I modified __init__.py as in f06e5b5, but it does not work for me.

KeyError                                  Traceback (most recent call last)
<ipython-input-2-5bb9c9b19882> in <module>
----> 1 parse, category_names = liwc.load_token_parser('/Users/lawrencexu/Documents/data/MisInfoText/LIWC2015.dic')

~/Library/Python/3.7/lib/python/site-packages/liwc-0.4.1-py3.7.egg/liwc/__init__.py in load_token_parser(filepath)
     75       the lexicon
     76     '''
---> 77     lexicon, category_names = read_dic(filepath)
     78     trie = _build_trie(lexicon)
     79     def parse_token(token):

~/Library/Python/3.7/lib/python/site-packages/liwc-0.4.1-py3.7.egg/liwc/__init__.py in read_dic(filepath)
     26                 category_mapping[parts[0]] = parts[1]
     27             elif mode == 2:
---> 28                 lexicon[parts[0]] = [category_mapping[category_id] for category_id in parts[1:]]
     29     return lexicon, category_names
     30 

~/Library/Python/3.7/lib/python/site-packages/liwc-0.4.1-py3.7.egg/liwc/__init__.py in <listcomp>(.0)
     26                 category_mapping[parts[0]] = parts[1]
     27             elif mode == 2:
---> 28                 lexicon[parts[0]] = [category_mapping[category_id] for category_id in parts[1:]]
     29     return lexicon, category_names
     30 

KeyError: '30\t31\t120\t122'

Hi! First off, thanks to all of you for your interest, collaboration, and for helping each other out. I'm glad that people find this repo useful, and it's even more reassuring to see you helping fellow users diagnose, debug, and work around the problems you encounter :)

I just published liwc-python==0.5.0 to PyPI, which I hope fixes some of the problems noted above (and, especially, doesn't break anything). Can you give it a try?
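If you want to retry with the new release, the usage is unchanged; something along these lines should do (the dictionary path below is just an example):

```python
import liwc

# after upgrading to 0.5.0; point this at your own copy of the dictionary
parse, category_names = liwc.load_token_parser('LIWC2015_English.dic')
print(len(category_names), 'categories loaded')

# parse() takes a single token and yields the matching category names
print(list(parse('happy')))
```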

Two caveats that still apply to this latest version:

  1. Spaces (e.g. the occasional LIWC bigram) are not handled specially, so if you're using a tokenization approach like the one in the README, it won't capture those (see the sketch after this list). With the newer LIWC lexicon(s) that match multi-word expressions, I expect the right way forward is to integrate tokenization with the trie-based LIWC matching, but that would be a considerable chunk of work.
  2. The parentheses in the (2015) LIWC lexicon are handled literally, partly because I'm not sure what they mean (I suspect they mark the parenthesized token as entirely optional?), and partly because using the trie without pre-tokenization would require some extensive re-engineering.
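To illustrate caveat 1: with a word-level tokenizer in the spirit of the README example (the exact README code may differ), a bigram is never produced as a single token, so a multi-word lexicon entry can't match:

```python
import re

def tokenize(text):
    # simple word-level tokenizer in the spirit of the README example
    for match in re.finditer(r'\w+', text, re.UNICODE):
        yield match.group(0)

print(list(tokenize('it was kind of hard to say')))
# ['it', 'was', 'kind', 'of', 'hard', 'to', 'say']
# "kind of" never appears as a single token, so a multi-word LIWC 2015
# entry like "kind of" cannot match when tokens are parsed one at a time
```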

Does this package support the LIWC 2019 version?