langgenius / dify

Dify is an open-source LLM app development platform. Dify's intuitive interface combines AI workflow, RAG pipeline, agent capabilities, model management, observability features and more, letting you quickly go from prototype to production.

Home Page: https://dify.ai

AttributeError: 'NoneType' object has no attribute 'timestamp'

NiuBlibing opened this issue

Self Checks

  • This is only for bug reports; if you would like to ask a question, please head to Discussions.
  • I have searched for existing issues, including closed ones.
  • I confirm that I am using English to submit this report (I have read and agree to the Language Policy).
  • Please be sure to submit issues in English, or they will be closed. Thank you! :)
  • Please do not modify this template :) and fill in all the required fields.

Dify version

0.6.11

Cloud or Self Hosted

Self Hosted (Docker)

Steps to reproduce

Create a workflow and send a request to the API.
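For reference, a minimal way to trigger it (a sketch assuming the documented service API payload shape; replace the host and app key with your own):

import requests

# POST to the workflow run endpoint seen in the traceback below.
resp = requests.post(
    "http://localhost/v1/workflows/run",
    headers={"Authorization": "Bearer app-xxxxxxxx"},
    json={
        "inputs": {},
        "response_mode": "blocking",
        "user": "test-user",
    },
)
print(resp.status_code, resp.text)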

✔️ Expected Behavior

The request succeeds and returns a response.

❌ Actual Behavior

The request fails with the following error log:

2024-07-01 02:24:30.292 ERROR [Dummy-36] [app.py:838] - Exception on /v1/workflows/run [POST]
Traceback (most recent call last):
  File "/app/api/controllers/service_api/app/workflow.py", line 45, in post
    response = AppGenerateService.generate(
  File "/app/api/services/app_generate_service.py", line 67, in generate
    return WorkflowAppGenerator().generate(
  File "/app/api/core/app/apps/workflow/app_generator.py", line 83, in generate
    return self._generate(
  File "/app/api/core/app/apps/workflow/app_generator.py", line 129, in _generate
    response = self._handle_response(
  File "/app/api/core/app/apps/workflow/app_generator.py", line 276, in _handle_response
    return generate_task_pipeline.process()
  File "/app/api/core/app/apps/workflow/generate_task_pipeline.py", line 111, in process
    return self._to_blocking_response(generator)
  File "/app/api/core/app/apps/workflow/generate_task_pipeline.py", line 119, in _to_blocking_response
    for stream_response in generator:
  File "/app/api/core/app/apps/workflow/generate_task_pipeline.py", line 197, in _process_stream_response
    yield self._workflow_node_finish_to_stream_response(
  File "/app/api/core/app/task_pipeline/workflow_cycle_manage.py", line 389, in _workflow_node_finish_to_stream_response
    finished_at=int(workflow_node_execution.finished_at.timestamp()),
AttributeError: 'NoneType' object has no attribute 'timestamp'

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
File "/usr/local/lib/python3.10/site-packages/flask/app.py", line 880, in full_dispatch_request
rv = self.dispatch_request()
File "/usr/local/lib/python3.10/site-packages/flask/app.py", line 865, in dispatch_request
return self.ensure_sync(self.view_functions[rule.endpoint])(**view_args) # type: ignore[no-any-return]
File "/usr/local/lib/python3.10/site-packages/flask_restful/__init__.py", line 489, in wrapper
resp = resource(*args, **kwargs)
File "/usr/local/lib/python3.10/site-packages/flask/views.py", line 110, in view
return current_app.ensure_sync(self.dispatch_request)(**kwargs) # type: ignore[no-any-return]
File "/usr/local/lib/python3.10/site-packages/flask_restful/__init__.py", line 604, in dispatch_request
resp = meth(*args, **kwargs)
File "/app/api/controllers/service_api/wraps.py", line 75, in decorated_view
return view_func(*args, **kwargs)
File "/app/api/controllers/service_api/app/workflow.py", line 66, in post
raise InternalServerError()
werkzeug.exceptions.InternalServerError: 500 Internal Server Error: The server encountered an internal error and was unable to complete your request. Either the server is overloaded or there is an error in the application.

The AttributeError you're encountering is due to the finished_at attribute being None when the code attempts to call the timestamp() method on it. This issue can be resolved by ensuring that the finished_at attribute is set correctly when the workflow node execution is completed.

Here are the steps to ensure the finished_at attribute is set:

  1. Check Workflow Node Execution Methods:
    Ensure that the finished_at attribute is set in the _workflow_node_execution_success and _workflow_node_execution_failed methods. These methods should be called when the workflow node execution completes successfully or fails.

    def _workflow_node_execution_success(self, workflow_node_execution: WorkflowNodeExecution,
                                         start_at: float,
                                         inputs: Optional[dict] = None,
                                         process_data: Optional[dict] = None,
                                         outputs: Optional[dict] = None,
                                         execution_metadata: Optional[dict] = None) -> WorkflowNodeExecution:
        # ... other code ...
        workflow_node_execution.finished_at = datetime.now(timezone.utc).replace(tzinfo=None)
        # ... other code ...
    
    def _workflow_node_execution_failed(self, workflow_node_execution: WorkflowNodeExecution,
                                        start_at: float,
                                        error: str,
                                        inputs: Optional[dict] = None,
                                        process_data: Optional[dict] = None,
                                        outputs: Optional[dict] = None,
                                        execution_metadata: Optional[dict] = None) -> WorkflowNodeExecution:
        # ... other code ...
        workflow_node_execution.finished_at = datetime.now(timezone.utc).replace(tzinfo=None)
        # ... other code ...
  2. Verify Method Calls:
    Ensure that these methods are being called correctly when the workflow node execution completes. If the finished_at attribute is still None, you might need to debug the code to check if these methods are being skipped or if there is an issue with the database commit.

  3. Check Workflow Service:
    In the WorkflowService class, ensure that the finished_at attribute is set in the run_draft_workflow_node method.

    def run_draft_workflow_node(self, app_model: App,
                                node_id: str,
                                user_inputs: dict,
                                account: Account) -> WorkflowNodeExecution:
        # ... other code ...
        if node_run_result.status == WorkflowNodeExecutionStatus.SUCCEEDED:
            workflow_node_execution = WorkflowNodeExecution(
                # ... other attributes ...
                finished_at=datetime.now(timezone.utc).replace(tzinfo=None)
            )
        else:
            workflow_node_execution = WorkflowNodeExecution(
                # ... other attributes ...
                finished_at=datetime.now(timezone.utc).replace(tzinfo=None)
            )
        db.session.add(workflow_node_execution)
        db.session.commit()
        return workflow_node_execution
  4. Additional Logging:
    Add additional logging to verify that these methods are being executed and that the finished_at attribute is being set as expected; a minimal sketch follows this list.
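    A minimal sketch of such logging, assuming Python's standard logging module:

    import logging
    from datetime import datetime, timezone

    logger = logging.getLogger(__name__)

    # Inside _workflow_node_execution_success / _workflow_node_execution_failed:
    workflow_node_execution.finished_at = datetime.now(timezone.utc).replace(tzinfo=None)
    logger.debug(
        "workflow_node_execution %s: finished_at set to %s",
        workflow_node_execution.id,
        workflow_node_execution.finished_at,
    )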

By ensuring that the finished_at attribute is set correctly in all relevant parts of your code, you should be able to resolve the AttributeError [1][2][3].
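Until the root cause is found, a defensive guard at the crash site would also prevent the 500. A minimal sketch (the helper below is hypothetical, not part of Dify):

from datetime import datetime, timezone
from typing import Optional

def safe_timestamp(finished_at: Optional[datetime]) -> int:
    """Fall back to the current UTC time when finished_at was never set."""
    if finished_at is None:
        finished_at = datetime.now(timezone.utc).replace(tzinfo=None)
    return int(finished_at.timestamp())

# At the line from the traceback (workflow_cycle_manage.py:389):
# finished_at=safe_timestamp(workflow_node_execution.finished_at),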

To continue talking to Dosu, mention @dosu.

[2024-07-01 10:26:49 +0000] [276] [ERROR] Error handling request
Traceback (most recent call last):
  File "/app/api/.venv/lib/python3.10/site-packages/gunicorn/workers/base_async.py", line 115, in handle_request
    for item in respiter:
  File "/app/api/.venv/lib/python3.10/site-packages/werkzeug/wsgi.py", line 256, in __next__
    return self._next()
  File "/app/api/.venv/lib/python3.10/site-packages/werkzeug/wrappers/response.py", line 32, in _iter_encoded
    for item in iterable:
  File "/app/api/.venv/lib/python3.10/site-packages/flask/helpers.py", line 113, in generator
    yield from gen
  File "/app/api/libs/helper.py", line 162, in generate
    yield from response
  File "/app/api/core/app/apps/base_app_generate_response_converter.py", line 28, in _generate
    for chunk in cls.convert_stream_full_response(response):
  File "/app/api/core/app/apps/workflow/generate_response_converter.py", line 45, in convert_stream_full_response
    for chunk in stream_response:
  File "/app/api/core/app/apps/workflow/generate_task_pipeline.py", line 158, in _to_stream_response
    for stream_response in generator:
  File "/app/api/core/app/apps/workflow/generate_task_pipeline.py", line 186, in _process_stream_response
    workflow_node_execution = self._handle_node_start(event)
  File "/app/api/core/app/task_pipeline/workflow_cycle_manage.py", line 444, in _handle_node_start
    workflow_node_execution = self._init_node_execution_from_workflow_run(
  File "/app/api/core/app/task_pipeline/workflow_cycle_manage.py", line 199, in _init_node_execution_from_workflow_run
    tenant_id=workflow_run.tenant_id,
AttributeError: 'NoneType' object has no attribute 'tenant_id'
[2024-07-01 08:27:40 +0000] [276] [ERROR] Error handling request
Traceback (most recent call last):
File "/app/api/.venv/lib/python3.10/site-packages/gunicorn/workers/base_async.py", line 115, in handle_request
for item in respiter:
File "/app/api/.venv/lib/python3.10/site-packages/werkzeug/wsgi.py", line 256, in __next__
return self._next()
File "/app/api/.venv/lib/python3.10/site-packages/werkzeug/wrappers/response.py", line 32, in _iter_encoded
for item in iterable:
File "/app/api/.venv/lib/python3.10/site-packages/flask/helpers.py", line 113, in generator
yield from gen
File "/app/api/libs/helper.py", line 162, in generate
yield from response
File "/app/api/core/app/apps/base_app_generate_response_converter.py", line 28, in _generate
for chunk in cls.convert_stream_full_response(response):
File "/app/api/core/app/apps/workflow/generate_response_converter.py", line 45, in convert_stream_full_response
for chunk in stream_response:
File "/app/api/core/app/apps/workflow/generate_task_pipeline.py", line 158, in _to_stream_response
for stream_response in generator:
File "/app/api/core/app/apps/workflow/generate_task_pipeline.py", line 201, in _process_stream_response
workflow_node_execution = self._handle_node_finished(event)
File "/app/api/core/app/task_pipeline/workflow_cycle_manage.py", line 516, in _handle_node_finished
usage_dict = outputs.get('usage', {})
AttributeError: 'NoneType' object has no attribute 'get'

Other fields may also be None; there is something wrong in the function.
@dosu.

The errors you're encountering indicate that workflow_run and outputs are None in the workflow_cycle_manage.py file. Here are the specific conditions under which these can be None:

  1. workflow_run can be None:

    • When the workflow_run is queried from the database and no matching record is found:
      workflow_run = db.session.query(WorkflowRun).filter(
          WorkflowRun.id == self._task_state.workflow_run_id).first()
      if not workflow_run:
          return None
  2. outputs can be None:

    • When there is no latest_node_execution_info available in the task state:
      if self._task_state.latest_node_execution_info:
          workflow_node_execution = db.session.query(WorkflowNodeExecution).filter(
              WorkflowNodeExecution.id == self._task_state.latest_node_execution_info.workflow_node_execution_id).first()
          outputs = workflow_node_execution.outputs
      else:
          outputs = None

To address these issues, ensure that these conditions are handled properly in your code. For example, you can add checks to handle None values before accessing their attributes:

if workflow_run is None:
    # Handle the case where workflow_run is None
    return

if outputs is None:
    # Handle the case where outputs is None
    return

Additionally, you might want to review the initialization and management of the workflow_node_execution object to ensure it is correctly set up and not leading to NoneType errors [1].
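As a quick check (a sketch, assuming the same Flask-SQLAlchemy db.session used in the snippets above), you can reload the object from the database before reading its fields:

# Discard whatever the session still holds for this object and re-SELECT it,
# so finished_at and outputs reflect the committed row rather than cached state.
db.session.refresh(workflow_node_execution)

if workflow_node_execution.finished_at is None:
    # Still unset even after a reload: the writer has not committed yet.
    ...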

To continue talking to Dosu, mention @dosu.

@takatost Could you have a look?
It seems SQLAlchemy's session reuse corrupts the cache:

A session on connection 0x7feb1599c7c0 updates id 113fa6ab-63aa-4091-ace5-b3b9c4ed50c8, and then another newly created session uses connection 0x7feb1599c540 to query the same id and gets an outdated cached row that is missing the finished_at field.


2024-07-03 10:21:16,276.276 DEBUG [Thread-102 (_generate_worker)] [base.py:971] - Connection <connection object at 0x7feb1599c540; dsn: 'user=dify_user password=xxx dbname=dify host=10.0.49.28 port=5432 options='-c timezone=UTC' keepalives=1 keepalives_idle=10', closed: 0> being returned to pool
2024-07-03 10:21:16,278.278 DEBUG [Thread-102 (_generate_worker)] [base.py:1428] - Connection <connection object at 0x7feb1599c540; dsn: 'user=dify_user password=xxx dbname=dify host=10.0.49.28 port=5432 options='-c timezone=UTC' keepalives=1 keepalives_idle=10', closed: 0> rollback-on-return
2024-07-03 10:21:20,275.275 DEBUG [Thread-101 (process_request_thread)] [base.py:738] - Connection <connection object at 0x7feb1599c7c0; dsn: 'user=dify_user password=xxx dbname=dify host=10.0.49.28 port=5432 options='-c timezone=UTC' keepalives=1 keepalives_idle=10', closed: 0> checked out from pool
2024-07-03 10:21:20,277.277 DEBUG [Thread-101 (process_request_thread)] [base.py:1296] - Pool pre-ping on connection <connection object at 0x7feb1599c7c0; dsn: 'user=dify_user password=xxx dbname=dify host=10.0.49.28 port=5432 options='-c timezone=UTC' keepalives=1 keepalives_idle=10', closed: 0>
2024-07-03 10:21:20,283.283 INFO [Thread-101 (process_request_thread)] [base.py:1099] - BEGIN (implicit)
2024-07-03 10:21:20,285.285 INFO [Thread-101 (process_request_thread)] [base.py:1899] - SELECT workflow_node_executions.id AS workflow_node_executions_id, workflow_node_executions.tenant_id AS workflow_node_executions_tenant_id, workflow_node_executions.app_id AS workflow_node_executions_app_id, workflow_node_executions.workflow_id AS workflow_node_executions_workflow_id, workflow_node_executions.triggered_from AS workflow_node_executions_triggered_from, workflow_node_executions.workflow_run_id AS workflow_node_executions_workflow_run_id, workflow_node_executions.index AS workflow_node_executions_index, workflow_node_executions.predecessor_node_id AS workflow_node_executions_predecessor_node_id, workflow_node_executions.node_id AS workflow_node_executions_node_id, workflow_node_executions.node_type AS workflow_node_executions_node_type, workflow_node_executions.title AS workflow_node_executions_title, workflow_node_executions.inputs AS workflow_node_executions_inputs, workflow_node_executions.process_data AS workflow_node_executions_process_data, workflow_node_executions.outputs AS workflow_node_executions_outputs, workflow_node_executions.status AS workflow_node_executions_status, workflow_node_executions.error AS workflow_node_executions_error, workflow_node_executions.elapsed_time AS workflow_node_executions_elapsed_time, workflow_node_executions.execution_metadata AS workflow_node_executions_execution_metadata, workflow_node_executions.created_at AS workflow_node_executions_created_at, workflow_node_executions.created_by_role AS workflow_node_executions_created_by_role, workflow_node_executions.created_by AS workflow_node_executions_created_by, workflow_node_executions.finished_at AS workflow_node_executions_finished_at 
FROM workflow_node_executions 
WHERE workflow_node_executions.id = %(id_1)s::UUID 
 LIMIT %(param_1)s
2024-07-03 10:21:20,287.287 INFO [Thread-101 (process_request_thread)] [base.py:1904] - [cached since 562.1s ago] {'id_1': '113fa6ab-63aa-4091-ace5-b3b9c4ed50c8', 'param_1': 1}
2024-07-03 10:21:20,359.359 DEBUG [Thread-101 (process_request_thread)] [cursor.py:1527] - Col ('workflow_node_executions_id', 'workflow_node_executions_tenant_id', 'workflow_node_executions_app_id', 'workflow_node_executions_workflow_id', 'workflow_node_executions_triggered_from', 'workflow_node_executions_workflow_run_id', 'workflow_node_executions_index', 'workflow_node_executions_predecessor_node_id', 'workflow_node_executions_node_id', 'workflow_node_executions_node_type', 'workflow_node_executions_title', 'workflow_node_executions_inputs', 'workflow_node_executions_process_data', 'workflow_node_executions_outputs', 'workflow_node_executions_status', 'workflow_node_executions_error', 'workflow_node_executions_elapsed_time', 'workflow_node_executions_execution_metadata', 'workflow_node_executions_created_at', 'workflow_node_executions_created_by_role', 'workflow_node_executions_created_by', 'workflow_node_executions_finished_at')
2024-07-03 10:21:20,361.361 DEBUG [Thread-101 (process_request_thread)] [cursor.py:1468] - Row ('113fa6ab-63aa-4091-ace5-b3b9c4ed50c8', 'dfe95fe3-b8c2-4958-9493-f2c8e2b1a292', 'c5c87941-9942-49b7-a08d-e0d48142c3ab', 'f14bc60d-02bc-4c80-aa30-f8731ccabd33', 'workflow-run', '02727813-7b6a-4d6c-ab06-769e7362dcb1', 3, '1719539748424', '1719539864128', 'llm', 'LLM', None, None, None, 'running', None, 0.0, None, datetime.datetime(2024, 7, 3, 10, 21, 15, 963098), 'end_user', '9a0d4268-6e8f-49ac-bc63-6fa358be181a', None)
2024-07-03 10:21:20,368.368 INFO [Thread-101 (process_request_thread)] [base.py:1899] - UPDATE workflow_node_executions SET process_data=%(process_data)s, outputs=%(outputs)s, status=%(status)s, elapsed_time=%(elapsed_time)s, execution_metadata=%(execution_metadata)s, finished_at=%(finished_at)s WHERE workflow_node_executions.id = %(workflow_node_executions_id)s::UUID
2024-07-03 10:21:20,369.369 INFO [Thread-101 (process_request_thread)] [base.py:1904] - [cached since 558.2s ago] {'process_data': '{"model_mode": "chat", "prompts": [{"role": "system", "text": "xxxxx ... (3158 characters truncated) ... xxxxxx ... (1263 characters truncated) ... mpletion_price_unit": "0", "completion_price": "0E-7", "total_tokens": 482, "total_price": "0E-7", "currency": "USD", "latency": 3.9399318600044353}}', 'status': 'succeeded', 'elapsed_time': 4.264478583005257, 'execution_metadata': '{"total_tokens": 482, "total_price": "0.0000000", "currency": "USD"}', 'finished_at': datetime.datetime(2024, 7, 3, 10, 21, 20, 366503), 'workflow_node_executions_id': '113fa6ab-63aa-4091-ace5-b3b9c4ed50c8'}
2024-07-03 10:21:20,378.378 INFO [Thread-101 (process_request_thread)] [base.py:1142] - COMMIT
2024-07-03 10:21:20,382.382 DEBUG [Thread-101 (process_request_thread)] [base.py:971] - Connection <connection object at 0x7feb1599c7c0; dsn: 'user=dify_user password=xxx dbname=dify host=10.0.49.28 port=5432 options='-c timezone=UTC' keepalives=1 keepalives_idle=10', closed: 0> being returned to pool
2024-07-03 10:21:20,384.384 DEBUG [Thread-101 (process_request_thread)] [base.py:1428] - Connection <connection object at 0x7feb1599c7c0; dsn: 'user=dify_user password=xxx dbname=dify host=10.0.49.28 port=5432 options='-c timezone=UTC' keepalives=1 keepalives_idle=10', closed: 0> rollback-on-return
2024-07-03 10:21:20,386.386 DEBUG [Thread-101 (process_request_thread)] [base.py:738] - Connection <connection object at 0x7feb1599c540; dsn: 'user=dify_user password=xxx dbname=dify host=10.0.49.28 port=5432 options='-c timezone=UTC' keepalives=1 keepalives_idle=10', closed: 0> checked out from pool
2024-07-03 10:21:20,388.388 DEBUG [Thread-101 (process_request_thread)] [base.py:1296] - Pool pre-ping on connection <connection object at 0x7feb1599c540; dsn: 'user=dify_user password=xxx dbname=dify host=10.0.49.28 port=5432 options='-c timezone=UTC' keepalives=1 keepalives_idle=10', closed: 0>
2024-07-03 10:21:20,391.391 INFO [Thread-101 (process_request_thread)] [base.py:1099] - BEGIN (implicit)
2024-07-03 10:21:20,394.394 INFO [Thread-101 (process_request_thread)] [base.py:1899] - SELECT workflow_node_executions.id, workflow_node_executions.tenant_id, workflow_node_executions.app_id, workflow_node_executions.workflow_id, workflow_node_executions.triggered_from, workflow_node_executions.workflow_run_id, workflow_node_executions.index, workflow_node_executions.predecessor_node_id, workflow_node_executions.node_id, workflow_node_executions.node_type, workflow_node_executions.title, workflow_node_executions.inputs, workflow_node_executions.process_data, workflow_node_executions.outputs, workflow_node_executions.status, workflow_node_executions.error, workflow_node_executions.elapsed_time, workflow_node_executions.execution_metadata, workflow_node_executions.created_at, workflow_node_executions.created_by_role, workflow_node_executions.created_by, workflow_node_executions.finished_at 
FROM workflow_node_executions 
WHERE workflow_node_executions.id = %(pk_1)s::UUID
2024-07-03 10:21:20,396.396 INFO [Thread-101 (process_request_thread)] [base.py:1904] - [cached since 562.4s ago] {'pk_1': '113fa6ab-63aa-4091-ace5-b3b9c4ed50c8'}
2024-07-03 10:21:20,403.403 DEBUG [Thread-101 (process_request_thread)] [cursor.py:1527] - Col ('id', 'tenant_id', 'app_id', 'workflow_id', 'triggered_from', 'workflow_run_id', 'index', 'predecessor_node_id', 'node_id', 'node_type', 'title', 'inputs', 'process_data', 'outputs', 'status', 'error', 'elapsed_time', 'execution_metadata', 'created_at', 'created_by_role', 'created_by', 'finished_at')
2024-07-03 10:21:20,407.407 DEBUG [Thread-101 (process_request_thread)] [cursor.py:1468] - Row ('113fa6ab-63aa-4091-ace5-b3b9c4ed50c8', 'dfe95fe3-b8c2-4958-9493-f2c8e2b1a292', 'c5c87941-9942-49b7-a08d-e0d48142c3ab', 'f14bc60d-02bc-4c80-aa30-f8731ccabd33', 'workflow-run', '02727813-7b6a-4d6c-ab06-769e7362dcb1', 3, '1719539748424', '1719539864128', 'llm', 'LLM', None, None, None, 'running', None, 0.0, None, datetime.datetime(2024, 7, 3, 10, 21, 15, 963098), 'end_user', '9a0d4268-6e8f-49ac-bc63-6fa358be181a', None)
2024-07-03 10:21:20,412.412 INFO [Thread-101 (process_request_thread)] [base.py:1125] - ROLLBACK
2024-07-03 10:21:20,415.415 DEBUG [Thread-101 (process_request_thread)] [base.py:971] - Connection <connection object at 0x7feb1599c540; dsn: 'user=dify_user password=xxx dbname=dify host=10.0.49.28 port=5432 options='-c timezone=UTC' keepalives=1 keepalives_idle=10', closed: 0> being returned to pool
2024-07-03 10:21:20,418.418 DEBUG [Thread-101 (process_request_thread)] [base.py:1428] - Connection <connection object at 0x7feb1599c540; dsn: 'user=dify_user password=xxx dbname=dify host=10.0.49.28 port=5432 options='-c timezone=UTC' keepalives=1 keepalives_idle=10', closed: 0> rollback-on-return
2024-07-03 10:21:20,539.539 ERROR [Thread-101 (process_request_thread)] [_internal.py:97] - Error on request:
Traceback (most recent call last):
  File "/home/test/ref/llm/dify/api/.venv/lib/python3.10/site-packages/werkzeug/serving.py", line 363, in run_wsgi
    execute(self.server.app)
  File "/home/test/ref/llm/dify/api/.venv/lib/python3.10/site-packages/werkzeug/serving.py", line 326, in execute
    for data in application_iter:
  File "/home/test/ref/llm/dify/api/.venv/lib/python3.10/site-packages/werkzeug/wsgi.py", line 256, in __next__
    return self._next()
  File "/home/test/ref/llm/dify/api/.venv/lib/python3.10/site-packages/werkzeug/wrappers/response.py", line 32, in _iter_encoded
    for item in iterable:
  File "/home/test/ref/llm/dify/api/.venv/lib/python3.10/site-packages/flask/helpers.py", line 113, in generator
    yield from gen
  File "/home/test/ref/llm/dify/api/libs/helper.py", line 162, in generate
    yield from response
  File "/home/test/ref/llm/dify/api/core/app/apps/base_app_generate_response_converter.py", line 28, in _generate
    for chunk in cls.convert_stream_full_response(response):
  File "/home/test/ref/llm/dify/api/core/app/apps/workflow/generate_response_converter.py", line 45, in convert_stream_full_response
    for chunk in stream_response:
  File "/home/test/ref/llm/dify/api/core/app/apps/workflow/generate_task_pipeline.py", line 158, in _to_stream_response
    for stream_response in generator:
  File "/home/test/ref/llm/dify/api/core/app/apps/workflow/generate_task_pipeline.py", line 203, in _process_stream_response
    yield self._workflow_node_finish_to_stream_response(
  File "/home/test/ref/llm/dify/api/core/app/task_pipeline/workflow_cycle_manage.py", line 428, in _workflow_node_finish_to_stream_response
    finished_at=int(workflow_node_execution.finished_at.timestamp()),
AttributeError: 'NoneType' object has no attribute 'timestamp'

@dosu.

Maybe we need the close_resets_only option or an expire_all() call on the session.
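For reference, a sketch of both ideas in plain SQLAlchemy (close_resets_only requires SQLAlchemy >= 2.0.22; Dify's Flask-SQLAlchemy setup may wire its session differently):

from sqlalchemy import create_engine
from sqlalchemy.orm import sessionmaker

engine = create_engine("postgresql+psycopg2://dify_user:xxx@10.0.49.28:5432/dify")

# close_resets_only=False makes Session.close() terminal, so a closed session
# cannot be silently reused with stale identity-map state.
SessionLocal = sessionmaker(engine, close_resets_only=False)

with SessionLocal() as session:
    # ... queries and updates ...
    # expire_all() marks every loaded object stale, so the next attribute
    # access re-SELECTs from the database instead of using cached values.
    session.expire_all()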

Maybe I found the reason: my PostgreSQL is behind Pgpool with load balancing (Bitnami's postgresql-ha chart), so reads are not strictly consistent between the master and slave databases.
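If that is the case, a workaround at the pgpool-II level (a sketch of standard pgpool.conf parameters; adjust to however the Bitnami chart exposes them) would be to stop load-balancing the reads that race with writes:

# pgpool.conf
load_balance_mode = off                     # route everything to the primary, or:
disable_load_balance_on_write = 'always'    # keep balancing, but pin a session
                                            # to the primary once it has written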

@takatost, this one's a bit out of my wheelhouse, could you step in and provide your guidance?

To continue talking to Dosu, mention @dosu.