jonwagner / Insight.Database

Fast, lightweight .NET micro-ORM


Specified argument was out of the range of valid values.

fara-ru251 opened this issue · comments

Describe the bug

Hello, today I ran into a strange situation.
When inserting string data into a CLOB column, I get the error "Specified argument was out of the range of valid values.".
The exception indicates that the problem is in the "Oracle.ManagedDataAccess" library.

The stack trace reads:

at Oracle.ManagedDataAccess.Client.OracleParameter.set_Size(Int32 value)
at Insight.Database.DBCommandExtensions.AddParameters(IDbCommand cmd, Object parameters)
at Insight.Database.DBConnectionExtensions.CreateCommand(IDbConnection connection, String sql, Object parameters, CommandType commandType, Nullable`1 commandTimeout, IDbTransaction transaction)
at Insight.Database.DBConnectionExtensions.<>c__DisplayClass172_0.<Execute>b__1(IDbCommand _, IDataReader __)
at Insight.Database.DBConnectionExtensions.ExecuteAndAutoClose[T](IDbConnection connection, Func`2 getCommand, Func`3 translate, CommandBehavior commandBehavior)
at Insight.Database.DBConnectionExtensions.ExecuteAndAutoClose[T](IDbConnection connection, Func`2 getCommand, Func`3 translate, Boolean closeConnection)
at Insight.Database.DBConnectionExtensions.Execute(IDbConnection connection, String sql, Object parameters, CommandType commandType, Boolean closeConnection, Nullable`1 commandTimeout, IDbTransaction transaction, Object outputParameters)
at Insight.Database.DBConnectionExtensions.ExecuteSql(IDbConnection connection, String sql, Object parameters, Boolean closeConnection, Nullable`1 commandTimeout, IDbTransaction transaction, Object outputParameters)
at ReportService.Controllers.ARMReportController.UploadReportExtended(EBSReportARM Report, Config _serviceConfig, IServiceProvider _provider)


The BINARYCHUNKS table in the DB, as shown in the attached screenshot of the table schema.

Steps to reproduce

var base64_data = Convert.ToBase64String(chunk.data);

  conn.ExecuteSql(@"INSERT INTO BINARYCHUNKS(ID, FILEID, HASH, DATA, DELETETIME)
  VALUES(:ID, :FILEID, :HASH, :DATA, :DELETETIME)",
                  new
                  {
                      ID = chunk.id,
                      FILEID = chunk.fileId,
                      HASH = chunk.hash,
                      DATA = base64_data,
                      DELETETIME = chunk.deleteTime
                  });

Any ideas or workarounds?
UPD: this problem was also reported against the Dapper library: DapperLib/Dapper#142

Expected behavior

I expected the Oracle CLOB type to map to the .NET String type, so the value could be passed as a String.

  • Dotnet version: [netcore3.1]
  • Database: [Oracle]
  • Library version: [6.3.7]

How long is the string?

Context: since the exception was thrown from OracleParameter.set_Size, it's likely that the Oracle driver has an issue with sending longer strings.

If that's the case, it might be possible to update Insight to automatically convert the string value to a CLOB parameter.
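In the meantime, a possible workaround is to bypass Insight's parameter inference for the large column and bind it explicitly as a CLOB. The sketch below assumes the Oracle.ManagedDataAccess client and an open `conn`; it is an illustration of the idea, not verified against a live database:

```csharp
using Oracle.ManagedDataAccess.Client;

// Build the command manually so the large string is bound as
// OracleDbType.Clob instead of an inferred Varchar2, whose Size
// setter rejects very long values.
using (var cmd = conn.CreateCommand())
{
    cmd.CommandText = @"INSERT INTO BINARYCHUNKS(ID, FILEID, HASH, DATA, DELETETIME)
                        VALUES(:ID, :FILEID, :HASH, :DATA, :DELETETIME)";
    cmd.Parameters.Add(new OracleParameter("ID", chunk.id));
    cmd.Parameters.Add(new OracleParameter("FILEID", chunk.fileId));
    cmd.Parameters.Add(new OracleParameter("HASH", chunk.hash));
    // Bind the Base64 payload explicitly as a CLOB.
    cmd.Parameters.Add(new OracleParameter("DATA", OracleDbType.Clob) { Value = base64_data });
    cmd.Parameters.Add(new OracleParameter("DELETETIME", chunk.deleteTime));
    cmd.ExecuteNonQuery();
}
```

You lose Insight's anonymous-object convenience for this one statement, but the explicit `OracleDbType.Clob` avoids the failing `Size` assignment.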

Actually not quite 1MB: I read the file in chunks into a 1MB buffer, one chunk at a time, and convert each chunk to a Base64 string.

int chunkSize = 1024 * 1024; // 1MB
byte[] bytes = new byte[chunkSize];
int numBytesToRead = (int)fsSource.Length;
while (numBytesToRead > 0)
{
    // Read at most one chunk into the buffer; Read may return
    // anything from 0 up to the requested count.
    int n = fsSource.Read(bytes, 0, Math.Min(chunkSize, numBytesToRead));
    // Break when the end of the file is reached.
    if (n == 0)
        break;
    numBytesToRead -= n;
    BinaryData chunk = new BinaryData
    {
        data = bytes,
        deleteTime = DateTime.Now,
        fileId = ReportId,
        hash = "",
        id = Guid.NewGuid().ToString()
    };
    reportDB.BinaryData.Add(chunk.id);
    reportDB.ChunkCount++;

    // Encode only the n bytes actually read, so a short final chunk
    // doesn't pick up stale data from the buffer.
    var base64_data = Convert.ToBase64String(bytes, 0, n);

    conn.ExecuteSql(@"INSERT INTO BINARYCHUNKS(ID, FILEID, HASH, DATA, DELETETIME)
    VALUES(:ID, :FILEID, :HASH, :DATA, :DELETETIME)",
                    new
                    {
                        ID = chunk.id,
                        FILEID = chunk.fileId,
                        HASH = chunk.hash,
                        DATA = base64_data,
                        DELETETIME = chunk.deleteTime
                    });
}
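For scale (numbers computed here, not taken from the thread): Base64 emits 4 output characters for every 3 input bytes, so a 1MB chunk becomes a string of about 1.4 million characters, which is far beyond what the driver accepts when it infers a sized Varchar2 parameter. A quick check:

```csharp
using System;

class Program
{
    static void Main()
    {
        // Base64 produces 4 characters per 3 input bytes (rounded up),
        // so a 1 MB buffer encodes to 4 * ceil(1048576 / 3) characters.
        int chunkSize = 1024 * 1024;
        string encoded = Convert.ToBase64String(new byte[chunkSize]);
        Console.WriteLine(encoded.Length); // 1398104
    }
}
```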

This is fixed in 6.3.8