scala / bug

Scala 2 bug reports only. Please, no questions — proper bug reports only.

Home Page: https://scala-lang.org

Concurrency problem when using the subtype operator <:<

akamali opened this issue · comments

Using Scala 2.11.6 with Java 1.8.0_111 on Ubuntu 16.04.2 LTS.

I have the following code

import scala.reflect.runtime.universe.{Type, typeOf}

object bug {

  trait PA[+AttrType]

  def main(args: Array[String]): Unit = {
    val lock = new Object()

    0 until 100 foreach { t =>
      (0 until Runtime.getRuntime.availableProcessors() * 4).par.foreach { id =>
        val someStringProperty: Type = typeOf[PA[String]]
        val someProperty: Type = typeOf[PA[_]]

        // Uncommenting the lock below makes the failures disappear.
        /*lock.synchronized*/ {
          if (!(someStringProperty <:< someProperty))
            println("Is not subtype in iteration " + t)
        }
      }
    }
  }
}

I get this output when I run the above in a loop:

$ while true; do scala bug.scala; done
Is not subtype in iteration 2
Is not subtype in iteration 1
Is not subtype in iteration 0
Is not subtype in iteration 11
Is not subtype in iteration 69
Is not subtype in iteration 6
Is not subtype in iteration 90
Is not subtype in iteration 16
Is not subtype in iteration 0
Is not subtype in iteration 2
Is not subtype in iteration 0

It's expected that someStringProperty is a subtype of someProperty, and someStringProperty <:< someProperty does return true most of the time. But when used from multiple threads (on a cold JVM) it appears to sometimes return false.

The problem seems to go away if I wrap the subtype check in a synchronized block.
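
For reference, a minimal sketch of that workaround (ReflectionGuard and isSubtype are made-up names; the point is only that every <:< call funnels through one shared lock):

import scala.reflect.runtime.universe.Type

// Minimal sketch of the workaround, assuming a single application-wide lock.
// The names here are illustrative, not an official API.
object ReflectionGuard {
  private val lock = new Object

  // Every subtype check goes through the same monitor, so the reflection
  // internals are never entered concurrently via this path.
  def isSubtype(a: Type, b: Type): Boolean =
    lock.synchronized { a <:< b }
}

Note that this only helps if all runtime-reflection calls in the process go through the same lock; a check guarded here can still race with unguarded reflection happening elsewhere.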

Duplicate of #6240, it looks like. Scala reflection is known not to be thread-safe (see below).

Note that in general when filing a bug report, we ask that you try it on a current Scala version before reporting (at present 2.12.4 or 2.13.0-M3; and the latest 2.11.x release was 2.11.12).

I was able to reproduce the problem with Scala 2.11.12, 2.12.4, and 2.13.0-M3 with Java 1.8.0_151 on Windows 10.

That moment when you realize <:< is not just a simple boolean check 😄

@SethTisue both bug #6240 and https://docs.scala-lang.org/overviews/reflection/thread-safety.html suggest that the problem was fixed as of 2.11. Based on profiling, if this really is non-thread-safe code, it spends a surprising amount of time holding locks.

You’re right about 6240. But I also feel like there’s further history on this, maybe on another ticket...?

re: holding locks, https://github.com/scala/scala/pull/3386/files gives some idea of the contortions involved trying to make this stuff threadsafe

Is anyone working on this? If not, any recommendations on fixing it?

This seems like a pretty major issue, and it currently affects Apache Spark, making certain methods not thread safe.

@SethTisue I know the Scala team is trying not to do any more work on Scala 2.11, but it will be a long time before all Scala 2.11-based Spark disappears from The Internets, so I hope the Scala team will consider back-porting a fix to 2.11 and 2.12.

scala.reflect is experimental. Making it thread safe without killing performance is nearly impossible. I was against even trying.

The workaround is to synchronize access by the user, or not use it at all.

2.11 is EOL.

When you consider all this together, I don’t see how we can justify cutting another 2.11 release.

(just my own thoughts, not a team statement :)

Quite apart from 2.11's EOL-ness, tinkering with scala.reflect in 2.11.x or even in 2.12.x, this late in their release cycles, seems quite risky to me.

The workaround is to synchronize access by the user

I was going to suggest this as well.

I'm not sure if the wording is clear. Just in case, I'll rephrase: Spark could, we suggest, add locks to their own code at the <:< use sites. Has this been tried?

or not use it at all

I'm curious to what extent Spark really needs TypeTag and <:<. TypeTag is quite heavyweight stuff; it has the entire Scala type system inside it. I have sometimes seen TypeTag used in the wild where it was overkill.

Perhaps TypeTag really is the right choice for Spark, but if so, I'd like to understand how and why. This understanding would be useful for the Scala team to have, going forward, even apart from the narrower question of 2.11.x changes.

@SethTisue

I'm curious to what extent Spark really needs TypeTag and <:<.

(I don't know much about Spark, but) An example usage looks like https://github.com/apache/spark/blob/v2.4.0/sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/ScalaReflection.scala#L61-L69, which uses the Scala type information to grab Spark SQL DataType.

Here's another. https://github.com/apache/spark/blob/v2.4.0/sql/core/src/main/scala/org/apache/spark/sql/SQLContext.scala#L262-L271.

Generally speaking, it seems like "take this datatype written in Scala, and serialize it in some other Spark datatype"? To support things like Array, what other facility would you recommend?
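
To make that concrete, here is a heavily simplified, hypothetical sketch of the pattern in those links (this is not Spark's actual code, and the DataType names are invented for illustration):

import scala.reflect.runtime.universe._

// Heavily simplified, hypothetical sketch of the "map a Scala type onto a
// framework data type" pattern. Not Spark's actual implementation.
object SchemaSketch {
  sealed trait DataType
  case object IntType    extends DataType
  case object StringType extends DataType
  case class ArrayType(element: DataType) extends DataType

  def schemaFor(tpe: Type): DataType = tpe match {
    case t if t <:< typeOf[Int]      => IntType
    case t if t <:< typeOf[String]   => StringType
    case t if t <:< typeOf[Array[_]] =>
      // Recurse into the element type, again via runtime reflection.
      ArrayType(schemaFor(t.typeArgs.head))
    case t =>
      throw new UnsupportedOperationException(s"No schema for type $t")
  }

  def main(args: Array[String]): Unit =
    println(schemaFor(typeOf[Array[String]])) // ArrayType(StringType)
}

Every one of those <:< and typeArgs calls goes into the shared runtime reflection universe, which is where the thread-safety problem shows up.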

I wasn't expecting a backport of a fix to 2.11 or 2.12, but IMO this behavior subverts expectations, and we should try to make Scala reflection better in future versions.

I'm adding a workaround in Spark: apache/spark#24085. Still, I think this should be fixed.

There is ALWAYS a way to achieve near-identical performance in a thread-safe way (for all cases that were previously correct). For example, even doing the dumb solution of synchronizing on calls to <:< provides imperceptible slowdown in all currently correct (single-threaded) cases. I've done my own experiment, and you can verify this experimentally as well.
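
For anyone who wants to check the single-threaded overhead themselves, a crude timing sketch might look like this (not a rigorous benchmark; JIT warm-up and measurement noise will dominate short runs):

import scala.reflect.runtime.universe.{Type, typeOf}

// Crude single-threaded timing sketch: compare bare <:< calls with the same
// calls wrapped in an uncontended lock. Not a rigorous benchmark.
object SubtypeTiming {
  trait PA[+A]
  private val lock = new Object

  def time(label: String)(body: => Unit): Unit = {
    val start = System.nanoTime()
    body
    println(f"$label: ${(System.nanoTime() - start) / 1e6}%.1f ms")
  }

  def main(args: Array[String]): Unit = {
    val sub: Type = typeOf[PA[String]]
    val sup: Type = typeOf[PA[_]]
    time("bare")         { (1 to 1000000).foreach(_ => sub <:< sup) }
    time("synchronized") { (1 to 1000000).foreach(_ => lock.synchronized(sub <:< sup)) }
  }
}

Numbers from a sketch like this are only indicative; a proper comparison would use something like JMH.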

It's unfortunate that Spark is using runtime reflection to generate the serialization code. The long-term solution would be to use typeclasses + compile-time typeclass derivation, but it might require some re-architecting.

Flink doesn't have this problem, because it's generating the serialization code at compile time with a macro: https://github.com/apache/flink/blob/master/flink-scala/src/main/scala/org/apache/flink/api/scala/codegen/TypeInformationGen.scala This is still not as ideal as using typeclasses, but much better than runtime reflection.
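
As a rough illustration of what the typeclass route looks like (the names below are invented; deriving instances for arbitrary case classes would need a macro or shapeless-style generic derivation):

// Illustrative sketch only: instances are resolved by the compiler via
// implicit search, so there is no shared mutable reflection state to lock.
trait Schema[A] { def describe: String }

object Schema {
  def apply[A](implicit s: Schema[A]): Schema[A] = s

  implicit val intSchema: Schema[Int] =
    new Schema[Int] { def describe = "int" }
  implicit val stringSchema: Schema[String] =
    new Schema[String] { def describe = "string" }

  // Composition: a schema for Array[A] is built from the schema for A,
  // including any user-supplied instance in scope.
  implicit def arraySchema[A](implicit a: Schema[A]): Schema[Array[A]] =
    new Schema[Array[A]] { def describe = s"array<${a.describe}>" }
}

object SchemaDemo extends App {
  println(Schema[Array[Int]].describe) // prints: array<int>
}

Deriving instances for arbitrary case classes on top of something like this is exactly the part that a macro (as in Flink's TypeInformationGen) or shapeless-style generic derivation automates.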

I guess another question to ask would be: why does this code need to be generated in parallel? Would it not be good enough to generate the serialization code beforehand and then initiate the job? Or am I missing something?

Spark is intended to be fully thread safe, not only within a Spark job, but also across parallel jobs in the same application: https://spark.apache.org/docs/latest/job-scheduling.html#scheduling-within-an-application

Inside a given Spark application (SparkContext instance), multiple parallel jobs can run simultaneously if they were submitted from separate threads. By “job”, in this section, we mean a Spark action (e.g. save, collect) and any tasks that need to run to evaluate that action. Spark’s scheduler is fully thread-safe and supports this use case to enable applications that serve multiple requests (e.g. queries for multiple users).

One use case where this becomes important is for reading multiple data sources at once. Here the bottleneck is often waiting on each data source to return results, so it is best for the user to start multiple threads, one for each data source, and join all of them once each one is materialized into an RDD/DataFrame/Dataset or its side effect is complete.

I agree about type classes. It would be a better solution, but tricky to switch to now.

It's unfortunate that Spark is using runtime reflection to generate the serialization code. The long-term solution would be to use typeclasses + compile-time typeclass derivation, but it might require some re-architecting.

The worst part is that Spark actually has an Encoder typeclass, but (1) it's hard to implement your own and (2) they don't compose at all. E.g. when I manage to define my own Encoder[MyType] and I have a case class Foo(a: Int, b: MyType), I will still get a runtime error, because Encoder[Foo] doesn't delegate to my encoder but uses a limited set of hardcoded encoders.

Ah yes, I forgot about Encoder.

To be fair, it's not easy to write typeclass derivation that composes well in Scala; just take shapeless and kittens, for example. But it should be possible. I've done this exercise for Flink: https://github.com/joroKr21/flink-shapeless, but I haven't tried it for Spark.

Happy to see the workaround is doable! Just to be clear, we do not intend to provide (full) thread-safety for scala.reflect -- I don't think it's feasible. Closing this ticket to reflect that.

@adriaanm @SethTisue I don't understand.

  1. What's wrong with the simple solution of wrapping the <:< operator in a synchronized block internally? Isn't it much better than doing nothing?
  2. Even if this is such a difficult issue to solve, it seems like egg on the face of Scala. I want Scala to be a language I'm happy to work in / tell my friends about. So why close the issue?

Apparently our communication around scala.reflect being experimental hasn't been very effective. It's a layer built on top of the compiler, and you'll just have to take my word for it (or look around in the PR queue) that it's not an easy thing to make thread safe. If it was as easy as you suggest in 1., we'd have done it.

  1. 🤷‍♂️

Sorry if I misunderstood or missed something, but what's wrong with approach 1? If there's a problem with it, then Spark shouldn't take that approach either.