If you encounter an error similar to:

```
a.c.s.PersistentShardCoordinator - Exception in receiveRecover when replaying event type [akka.cluster.sharding.ShardCoordinator$Internal$ShardRegionRegistered] with sequence number [39] for persistenceId [/sharding/ACTORID].
java.lang.IllegalArgumentException: requirement failed: Region Actor[akka.tcp://AKKA_ADDRESS] already registered…
```

This is typically encountered in version 1.3 or older* and is usually caused by corruption of the shard coordinator data, most often the result of multiple independent clusters writing to the same keyspace. You can fix this by following the Akka instructions on 'Removal of Internal Cluster Sharding Data' (additional pointers below**). After cleaning this up, you must also make sure that multiple clusters can never write to the same keyspace again. There are several possible reasons for this to have occurred: the most common is using auto downing instead of Lightbend's Split Brain Resolver, so check that first. If that is not the cause, you may need to open a support case and provide details on how you form your cluster.
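With auto downing, a network partition can split the cluster into independent islands that each elect their own shard coordinator and write to the shared keyspace. Below is a minimal configuration sketch of the fix, assuming the Lightbend Split Brain Resolver dependency is on the classpath; the provider class and the keep-majority strategy shown here are illustrative, not a prescription for your deployment:

```hocon
# application.conf -- illustrative sketch, assuming the Lightbend
# Split Brain Resolver artifact is on the classpath.
akka.cluster {
  # Remove any auto downing setting such as:
  # auto-down-unreachable-after = 10s

  # Let the Split Brain Resolver decide which side of a partition survives.
  downing-provider-class = "com.lightbend.akka.sbr.SplitBrainResolverProvider"
}

akka.cluster.split-brain-resolver {
  # The strategy is a per-deployment choice; keep-majority is a common one.
  active-strategy = keep-majority
}
```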
* Lagom 1.4 introduced support for the improved Cluster Sharding state storage that Akka added in recent versions. If you plan to upgrade to Lagom 1.4 soon, you can opt out of "Cluster Sharding coordinator data" and into "Cluster Sharding distributed data" (also known as ddata). This switch requires a full cluster shutdown, but it avoids the need to clean up the corrupted "Cluster Sharding coordinator data", since distributed data mode does not read it. See the migration docs for more details.
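For reference, a minimal sketch of the opt-in is below. It uses the standard Akka setting, on the assumption that it applies unchanged to your Lagom 1.4 setup; verify the exact keys against the migration docs:

```hocon
# application.conf -- a minimal sketch; check the Lagom 1.4 migration
# docs before relying on it. Keeps the sharding coordinator state in
# Distributed Data (ddata) instead of persisting it to the event journal.
akka.cluster.sharding.state-store-mode = ddata
```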
** Note that in order to run the command that removes the "Cluster Sharding coordinator data", you must run it from a location that can connect to the database where that data is persisted. As mentioned in the docs linked above, it is recommended to use the same classpath and configuration files as your application when running the command.
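For example, following the invocation described in the Akka documentation (the entity type names below are placeholders for the names you pass when registering your sharded entities, and the `-2.3` flag is only needed for data written by Akka 2.3.x):

```sh
# Run from a host that can reach the journal database, with the same
# classpath and configuration files as the application itself.
java -classpath "<application classpath, including akka-cluster-sharding>" \
  akka.cluster.sharding.RemoveInternalClusterShardingData \
  -2.3 entityType1 entityType2
```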