blehnen / DotNetWorkQueue

A work queue for .NET with SQL Server, SQLite, Redis, and PostgreSQL backends


No way to create Named Logs anymore??

PeterOscarsson opened this issue · comments

Hi

I have updated to 0.6.2 from 0.5.
As the interface ILogProvider has been removed, is there a new way to create "named" loggers?
The ILogProvider interface worked very nicely together with Microsoft.Extensions.Logging.

Am I just missing something, or has the functionality been removed in 0.6.2?

The original logging interface was from LibLog (https://github.com/damianh/LibLog). The author chose to discontinue the project, so I went to a basic logging provider instead, which unfortunately does not support named loggers.

At some point, I will be switching to Microsoft.Extensions.Logging.Abstractions as the logging interface, I just haven't gotten around to making the switch.

The change to switch to the MS abstraction was pretty straightforward:

https://github.com/blehnen/DotNetWorkQueue/tree/microsoft.extensions.logging.abstractions

At some point soon, I'll merge this branch and create a new release. Not surprisingly, this is a breaking change around injecting the configured log provider.

I'll update the wiki at that time, but injecting the MS factory can be done like this -

.Register(() => logFactory, LifeStyles.Singleton);

The queue simply uses the Microsoft NullFactory/NullLogger if nothing is injected.
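For example, the end-to-end wiring might look roughly like the sketch below. This is only an illustration of the pattern, not the exact wiki sample: the Redis transport init class, the namespace locations, and the console-backed factory are assumptions on my part.

// Exact namespaces may differ; these are approximate.
using DotNetWorkQueue;                        // QueueContainer, LifeStyles
using DotNetWorkQueue.Transport.Redis.Basic;  // RedisQueueInit (transport chosen only for illustration)
using Microsoft.Extensions.Logging;

// Build a Microsoft logger factory; the console provider here is just an example.
ILoggerFactory logFactory = LoggerFactory.Create(builder => builder.AddConsole());

// Pass a registration override when creating the queue container so the queue
// resolves the injected factory instead of the default null logger factory.
using (var queueContainer = new QueueContainer<RedisQueueInit>(register =>
    register.Register(() => logFactory, LifeStyles.Singleton)))
{
    // create producers / consumers / schedulers from queueContainer as usual
}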

Oh, that's great news. Thanks. I was thinking of giving it a try myself, but I am having another, bigger issue with DotNetWorkQueue and .NET 6.
Since upgrading our project to .NET 6, DNWQ has started to throw System.StackOverflowException (and the task count in the TaskScheduler is > 18000).
I am working on a reproducible sample, but have not managed to trigger the error in the sample yet.
I'll come back if I find a way to reproduce the issue.

Polly (https://github.com/App-vNext/Polly) bulkheads are used to limit the task scheduler. My samples are currently not targeting .NET 6, but I'll see if I can get them switched over tonight.

Polly does not have a specific target for .NET 6, so I imagine it uses the .NET Standard DLL instead.

I've added trace logging to the scheduler; it will now log the current count and free slots as items are added and finished.
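To actually see those entries, Trace-level messages have to be allowed through in whatever provider is injected; for example, with the Microsoft console provider (a minimal sketch, not queue-specific code):

using Microsoft.Extensions.Logging;

// Let Trace-level output through so the scheduler's count / free-slot messages are visible.
var logFactory = LoggerFactory.Create(builder =>
    builder.SetMinimumLevel(LogLevel.Trace).AddConsole());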

OK. The bulkhead had 19 free slots when the StackOverflow occurred. I will try to pin it down tonight.
It happens rarely on Windows, but frequently if I run in a Linux container, and VERY frequently if running in a Kubernetes cluster (with CPU and memory constraints).
The problem is that it only occurs in our full application that uses the Job Scheduler. My test program has NO issues!

I took a quick look while eating lunch.

I think I see the problem, though I'm not 100% sure. Under load, it's possible for too many items to be queued. And, because the bulkhead exception is thrown from inside the new task, weird things happen.

This is due to the Polly bulkhead not really meshing well with my thread-limited task scheduler.
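For reference, the rejection behavior in question (Polly v7 bulkheads) looks roughly like this; a standalone illustration, not the queue's internal scheduler code:

using System.Threading.Tasks;
using Polly;
using Polly.Bulkhead;

// At most 2 concurrent executions, with at most 2 queued actions waiting for a slot.
var bulkhead = Policy.BulkheadAsync(2, 2);

try
{
    // If both the execution slots and the queue slots are full, the call is rejected
    // and BulkheadRejectedException surfaces to the awaiter.
    await bulkhead.ExecuteAsync(() => Task.Delay(1000));
}
catch (BulkheadRejectedException)
{
    // the caller has to decide what happens to the rejected work item
}

The point above is that this rejection can end up being thrown from inside a task the custom scheduler has already started, rather than at the call site.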

So - I've removed the bulkhead. The side effect of this:

  1. The 'memory queue' feature for the async task scheduler has been removed. This was dependent on Polly bulkheads. I might revisit this in the future, but I'm not sure if anyone even used it (I did not). This is a breaking change if the property on the configuration object was being set, as the property has been removed.

It takes a good 4 1/2 hours for all of the integration tests to run. If they all pass, I might be able to push out a new NuGet package tonight. This was a slightly risky change, so it's possible the NuGet release will be delayed by a day or more.

I've pushed out version 0.6.3; hopefully this addresses the task count issue in the TaskScheduler.

Hi, and thanks for the new version.
The logging interface is great, but I am still missing one very nice feature from the "old" logging: the name of the logger was the same as the name of the QUEUE. That made it very easy to spot any config issues, since I could see the name of the queue as the name of the logger.

I tested the new version and the stack overflow issue is still there.

Here is a stack dump if you want to think some more over lunch. Maybe you'll see something.

Stack overflow.
Repeat 32711 times:

at DotNetWorkQueue.TaskScheduling.SmartThreadPoolTaskScheduler.QueueTask(System.Threading.Tasks.Task)
at System.Threading.Tasks.Task.ScheduleAndStart(Boolean)

at System.Threading.Tasks.ContinueWithTaskContinuation.Run(System.Threading.Tasks.Task, Boolean)
at System.Threading.Tasks.Task.ContinueWithCore(System.Threading.Tasks.Task, System.Threading.Tasks.TaskScheduler, System.Threading.CancellationToken, System.Threading.Tasks.TaskContinuationOptions)
at Microsoft.Data.SqlClient.ConcurrentQueueSemaphore.WaitAsync(System.Threading.CancellationToken)
at Microsoft.Data.SqlClient.SNI.SNISslStream+<ReadAsync>d__3.MoveNext()
at System.Runtime.CompilerServices.AsyncMethodBuilderCore.Start[[Microsoft.Data.SqlClient.SNI.SNISslStream+<ReadAsync>d__3, Microsoft.Data.SqlClient, Version=2.0.20168.4, Culture=neutral, PublicKeyToken=23ec7fc2d6eaa4a5]](<ReadAsync>d__3 ByRef)
at Microsoft.Data.SqlClient.SNI.SNISslStream.ReadAsync(Byte[], Int32, Int32, System.Threading.CancellationToken)
at Microsoft.Data.SqlClient.SNI.SNIPacket.ReadFromStreamAsync(System.IO.Stream, Microsoft.Data.SqlClient.SNI.SNIAsyncCallback)
at Microsoft.Data.SqlClient.SNI.SNITCPHandle.ReceiveAsync(Microsoft.Data.SqlClient.SNI.SNIPacket ByRef)
at Microsoft.Data.SqlClient.SNI.SNIMarsConnection.ReceiveAsync(Microsoft.Data.SqlClient.SNI.SNIPacket ByRef)
at Microsoft.Data.SqlClient.SNI.SNIMarsConnection.StartReceive()
at Microsoft.Data.SqlClient.SNI.TdsParserStateObjectManaged.EnableMars(UInt32 ByRef)
at Microsoft.Data.SqlClient.TdsParser.EnableMars()

Ah, gotcha. Regarding the logger name matching the queue name: I don't see a reason this can't be the default, so I've pushed a change that does it. I've also changed the sample programs to output the name; by default Serilog won't show it in the console logger, so an output template has to be specified.

I don't have an ETA for a new version yet, but you could compile DotNetWorkQueue.dll in release mode and drop it in, I suppose.
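For example, with Serilog's console sink the logger (queue) name can be surfaced by including {SourceContext} in the output template; a minimal sketch, assuming Serilog.Extensions.Logging and the Serilog console sink are in use:

using Microsoft.Extensions.Logging;
using Serilog;

// {SourceContext} carries the logger name (now the queue name); the default console
// template does not include it, so an explicit output template is needed.
Log.Logger = new LoggerConfiguration()
    .MinimumLevel.Debug()
    .WriteTo.Console(outputTemplate:
        "[{Timestamp:HH:mm:ss} {Level:u3}] [{SourceContext}] {Message:lj}{NewLine}{Exception}")
    .CreateLogger();

var logFactory = LoggerFactory.Create(builder => builder.AddSerilog(Log.Logger));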

Well, the stack overflow is an odd one. I'll spend some time later seeing if I can re-create it.

Off the top of my head: if you were using .NET < 4.8 before, everything was running on a different SQL client, so maybe there is something weird going on with .NET 6.0 and Microsoft.Data.SqlClient.

I am using the Redis transport as the backend for DotNetWorkQueue, not SQL. The SQL is a separate call that is used in a scheduled worker.
But I just managed to create a working reproducible sample on Linux (WSL and Docker). It has something to do with using "MultipleActiveResultSets=true;" in the connection string. It is not reproducible on Windows, though.
I'll test some more and see if I can pinpoint the error.

Thanks for the change to the logging, BTW.

OK, I have a solution to my problem. If I add "Microsoft.Data.SqlClient" v3.0.1 or higher (I'm using 4.0.0), everything works. There has to be a bug in the version (2.1.4) that EF Core 6 uses by default.
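For reference, adding a direct package reference makes NuGet prefer it over the 2.1.4 that EF Core 6 pulls in transitively; a sketch of the project-file change described above (version number taken from the comment):

<ItemGroup>
  <!-- Direct reference overrides the transitive Microsoft.Data.SqlClient 2.1.4 from EF Core 6 -->
  <PackageReference Include="Microsoft.Data.SqlClient" Version="4.0.0" />
</ItemGroup>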