coin-or / pulp

A Python Linear Programming API

Home Page: http://coin-or.github.io/pulp/

Does reading from an MPS file create a unique hashcode for same-named variables?

tle4336 opened this issue · comments

I would like to revisit this topic (#155). Is there a way to rewrite the hashcodes created by multiple child processes by exporting the model to a local .mps file and then reading it back immediately afterwards? Point #1 of the discussion I read here says that "PuLP permits having variable names because it uses an internal code for each one. But we do not export that code. So we identify variables by their name only." My current model has tens of thousands of variables with distinct names (absolutely no overlapping names) and thousands of constraints.
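For context, here is a minimal sketch of the behaviour in question, assuming a recent PuLP release where LpVariable hashing is based on object identity rather than on the variable's name (as discussed in #155); the names `a`, `b` and `"demo"` are just placeholders:

```python
import pulp

a = pulp.LpVariable("x", lowBound=0)
b = pulp.LpVariable("x", lowBound=0)  # same name, different Python object

# Distinct objects carry distinct hashes, so PuLP's expression dictionaries
# keep two separate entries for what is conceptually one variable "x".
print(a is b)               # False
print(hash(a) == hash(b))   # False

prob = pulp.LpProblem("demo", pulp.LpMinimize)
prob += a + b               # objective ends up with two columns, both named "x"
print(len(prob.objective))  # 2, not 1
```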

Based on that sentence, suppose I use multiple threads to add hundreds of constraints to a PuLP problem object and then, at the end of the process, export that object to an .mps file and read the model back immediately, the whole purpose being to ensure that there is ONE hashcode for each variable appearing in the objective function and constraints. I would like to ask for your expertise (@pchtsp @stumitchell): would this solve the issue of having two hashcodes for the same named variable that appears in the objective function and in some of the constraints, because those constraints were built in a multi-threaded environment?

Could anyone help me with this issue of two distinct hashcodes being mapped to the same named variable, which leads the PuLP solver to treat them as two distinct variables rather than one? Is the way to circumvent this behavior to export the model to a local .mps file and then read it back right afterwards, so that a single, unique hashcode is generated for each named variable?
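For reference, here is a minimal sketch of the export/reimport round trip being proposed, assuming a PuLP version where LpProblem.fromMPS is available; the file name "model.mps" and the tiny model are placeholders for the real multi-threaded one:

```python
import pulp

# Stand-in for the real model built across multiple threads.
prob = pulp.LpProblem("example", pulp.LpMinimize)
x = pulp.LpVariable("x", lowBound=0)
y = pulp.LpVariable("y", lowBound=0)
prob += 2 * x + 3 * y      # objective
prob += x + y >= 10        # constraint

# The MPS file stores variables by name only; Python object identities
# (and hence their hashes) are not exported.
prob.writeMPS("model.mps")

# Reading it back yields one LpVariable object per distinct name,
# returned in a dict keyed by that name, together with the rebuilt problem.
variables, prob2 = pulp.LpProblem.fromMPS("model.mps", sense=pulp.LpMinimize)

prob2.solve()
print({name: v.value() for name, v in variables.items()})
```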