tabulapdf / tabula-extractor

Extract tables from PDF files

Implement Bentley-Ottmann algorithm for detecting grids

jazzido opened this issue

If we detect that a set of horizontal and vertical lines form a grid, we can construct the set of cells using Bentley-Ottmann.

See:

When ready, we should replace @jeremybmerrill's TableGuesser.find_tables with Bentley-Ottmann.

Instead of implementing Bentley-Ottmann, let's do it naively (O(n^2)) and see what happens.
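A naive version could be as simple as testing every horizontal ruling against every vertical one. Rough sketch below, with a throwaway Segment struct standing in for tabula's Ruling objects (coordinates assumed normalized, i.e. x1 <= x2 and y1 <= y2); illustrative only, not code from the repo:

# Throwaway representation; tabula's actual Ruling class is not used here.
Segment = Struct.new(:x1, :y1, :x2, :y2)

def crossing_points(horizontals, verticals)
  points = []
  horizontals.each do |h|
    verticals.each do |v|
      # a vertical crosses a horizontal when the vertical's x falls inside the
      # horizontal's x-range and the horizontal's y inside the vertical's y-range
      if v.x1.between?(h.x1, h.x2) && h.y1.between?(v.y1, v.y2)
        points << [v.x1, h.y1]
      end
    end
  end
  points
end

# crossing_points([Segment.new(0, 5, 10, 5)], [Segment.new(3, 0, 3, 10)])
# => [[3, 5]]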

See this algorithm in Anssi Nurminen's master's thesis:

array foundRectangles;
// All crossing-points have been sorted from up to down,
// and left to right, in ascending order
Loop for each crossing-point:
{
    topLeft = NextCrossingPoint();
    // Fetch all points on the same vertical and horizontal
    // line with the current crossing-point
    array x_points = CrossingPointsDirectlyBelow( topLeft );
    array y_points = CrossingPointsDirectlyToTheRight( topLeft );
    Loop for each point x_point in x_points:
    {
        // Skip to the next crossing-point
        if( NOT EdgeExistsBetween( topLeft, x_point )) next crossing-point;
        Loop for each point y_point in y_points:
        {
            if( NOT EdgeExistsBetween( topLeft, y_point )) next crossing-point;
            // Hypothetical bottom-right point of the rectangle
            btmRight = Point( y_point.x(), x_point.y() );
            if( CrossingPointExists( btmRight ) AND
                EdgeExistsBetween( x_point, btmRight ) AND
                EdgeExistsBetween( y_point, btmRight ))
            {
                // Rectangle is confirmed to have 4 sides
                foundRectangles.append( Rectangle( topLeft, btmRight ));
                // Each crossing-point can be the top-left corner
                // of only a single rectangle
                next crossing-point;
            }
        }
    }
}
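
For reference, a rough Ruby transliteration of that pseudocode, working on plain [x, y] point pairs. The block passed to find_cells stands in for an edge-existence check ("is there an unbroken ruling between these two crossing points?"); all names here are made up for the sketch, not taken from the rectalgo branch:

require 'set'

def find_cells(points)
  point_set = Set.new(points)
  cells = []
  points.each do |top_left|
    x, y = top_left
    below    = points.select { |px, py| px == x && py > y }.sort_by { |_, py| py }
    right_of = points.select { |px, py| py == y && px > x }.sort_by { |px, _| px }
    found = false
    below.each do |bottom_left|
      break if found
      next unless yield(top_left, bottom_left)          # no edge going down
      right_of.each do |top_right|
        next unless yield(top_left, top_right)          # no edge going right
        bottom_right = [top_right[0], bottom_left[1]]   # hypothetical 4th corner
        if point_set.include?(bottom_right) &&
           yield(bottom_left, bottom_right) &&
           yield(top_right, bottom_right)
          cells << [top_left, bottom_right]             # cell as [top-left, bottom-right]
          found = true                                  # one cell per top-left corner
          break
        end
      end
    end
  end
  cells
end

(The pseudocode's early "next crossing-point" exits are dropped here for simplicity; this version just skips candidates with missing edges, which gives the same result less efficiently.)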

Cf. this too: http://www.drdobbs.com/database/the-maximal-rectangle-problem/184410529 and "convex hulls".

Whatever I/we write that more efficiently transforms a collection of lines into (a) the tables and (b) their constituent cells should take the place of TableGuesser.find_tables and Spreadsheet.new

I would note that it might be better to detect cells (i.e. minimal rectangles) first, then piece together a Spreadsheet from that. We could then more elegantly deal with weirdly shaped tables.

Started implementing the Nurminen algorithm a bit in the rectalgo branch; feel free to work on it if you want, don't if you don't.

Nurminen algo implemented in rectalgo and merged into spreadsheets, which is now up to date with text-extractor-refactor. I need to check if it, in fact, copes better with weirdly shaped tables, which I'll try to do tomorrow at work. If it does, then we can close this.

WHOOOOOOO!

Oh, I realized I still have to write the algorithm to transform cells (minimal rectangles) into Spreadsheets (maximal rectangles, or just Areas). Do you know of any smart implementations of this anywhere? Otherwise I'll just write something naive that hopefully works.
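
Something naive could be: treat two cells as connected when their bounding boxes touch or overlap, merge connected groups, and take each group's bounding box as a candidate Spreadsheet area. A sketch, using placeholder hashes instead of tabula's Cell/Spreadsheet classes:

# cells: [{left:, top:, width:, height:}, ...] -- placeholder hashes,
# not tabula's Cell objects
def group_cells_into_areas(cells)
  touching = lambda do |a, b|
    a[:left] <= b[:left] + b[:width]  && b[:left] <= a[:left] + a[:width] &&
      a[:top] <= b[:top] + b[:height] && b[:top] <= a[:top] + a[:height]
  end

  # merge cells into connected groups (quadratic, but so is everything else so far)
  groups = []
  cells.each do |cell|
    hits, rest = groups.partition { |g| g.any? { |c| touching.call(c, cell) } }
    groups = rest << (hits.flatten(1) << cell)
  end

  # each group's bounding box is a candidate Spreadsheet area
  groups.map do |g|
    left   = g.map { |c| c[:left] }.min
    top    = g.map { |c| c[:top] }.min
    right  = g.map { |c| c[:left] + c[:width] }.max
    bottom = g.map { |c| c[:top] + c[:height] }.max
    { left: left, top: top, width: right - left, height: bottom - top }
  end
end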

The hascells branch works on the China PDF I sent by email last week. That branch finds cells first, then builds a Spreadsheet out of that. Note that this is just a proof of concept and needs a lot of efficiency work and getting rid of silly hacks (especially around nearly intersecting lines).

See more details in the commit: ccbf671

Oh, and rectalgo is obsolete and can be deleted. It's merged into spreadsheets.

Awesome, Jeremy. Thanks!

Can you write a short test method when you have the chance? (I'd like to figure out how to implement the spreadsheet extractor in the UI workflow)

Yeah, totally. I'll write the test tomorrow when I get to work. I think it'll work pretty well everywhere, but my tests so far are pretty limited.

As far as actually using it is concerned, most of the work is done for free. There are efficiency gains to be had (e.g. lazy table extraction), but the simplest script looks like this:

require 'tabula' # tabula-extractor gem

pdf_file_path = "whatever.pdf"
out = open("out.csv", 'w')
extractor = Tabula::Extraction::SpreadsheetExtractor.new(pdf_file_path, :all)
extractor.extract.each do |pdf_page, spreadsheet|
  out << spreadsheet.to_csv
  out << "\n\n"
end
out.close

BTW, O'Rourke's algorithm is beautiful.

that McGill demo, in particular, is pretty cool.

today was one of those days when I had Ruby, Java, JRuby and Python docs open all at once trying to convert the Python example into (J)Ruby.

As I said, the implementation is a little hairy because of the nearly-intersecting-lines problem, but, hey, it works for now.

Aren't those the best days? ;)

Idea: now that we're getting serious with computational geometry algos, we should start to consider snap rounding (http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.23.220) for lines.

This'd fix #38?

It might, not sure.

Also, if we're going to snap-round lines, we need to determine the size of the "pixel" first. If I'm not mistaken, an upper bound for that value would be the (average?) width of the lines present in our area of interest. We'd need to track that value; we don't do that now.
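
Roughly what I mean, sketched with placeholder ruling hashes (not our internal representation). Note this only quantizes endpoints to a grid sized by the average stroke width; real snap rounding, as in the paper, also perturbs segments that pass close to a vertex, which isn't shown here:

# rulings: [{x1:, y1:, x2:, y2:, width: stroke_width}, ...] -- placeholder shape
def snap_rulings(rulings)
  # use the average stroke width as the snapping "pixel" size
  pixel = rulings.map { |r| r[:width] }.inject(:+) / rulings.size.to_f
  snap = lambda { |v| (v / pixel).round * pixel }
  rulings.map do |r|
    r.merge(x1: snap.call(r[:x1]), y1: snap.call(r[:y1]),
            x2: snap.call(r[:x2]), y2: snap.call(r[:y2]))
  end
end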

Tests written in: d94ae3f

bin/tabula modified to use only SpreadsheetExtractor in: 7f1f000

So SpreadsheetExtractor also works for non-ruled tables?

Nope. That would be a good reason to retain the old stuff in bin/tabula.... :P

LOL.

Ok, so I guess we should have a --spreadsheet flag that forces using the new extractor, and a way of detecting spreadsheet-y tables too.

yeah

working on that now

better now... fe88cb4

I didn't write the heuristic yet. I haven't profiled the spreadsheets stuff; if it's quick enough, we can just do all or most of that work, see whether a handful of Spreadsheet objects were detected, and if so, use the SpreadsheetExtractor method.
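
Roughly the decision I have in mind; spreadsheets_on and extract_with_table_guesser below are placeholder names for whatever we end up exposing, not existing methods:

def extract_page(page)
  candidates = spreadsheets_on(page)   # ruled-table ("spreadsheet") detection
  if candidates.any?
    # spreadsheet-y tables found: use the new extractor's output
    candidates.map(&:to_csv).join("\n\n")
  else
    # fall back to the old column-based path
    extract_with_table_guesser(page)
  end
end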

I'm going to merge spreadsheets into text-extractor-refactor and delete it. Let's keep working on (or branching from) the latter.

What do you think?

Fine by me. Only objection is that text-extractor-refactor is a pain in the ass to type (all those left-hand letters!)... maybe a different name?

Aight, new merge candidate is now pre07.

Closing this one.