Akka-Cluster (2) - the distributed pub/sub mechanism
In the previous post we covered the cluster singleton, whose job is to guarantee that there is always exactly one singleton instance in a cluster. Concretely, a ClusterSingletonManager is deployed on every node of the cluster; the cluster leader then picks one node and instructs the ClusterSingletonManager there to run the cluster singleton instance. Interaction with the singleton goes through a ClusterSingletonProxy instance, constructed on the spot as the communication target. In terms of usage scenarios, the cluster singleton suits a pull-style model: we treat the singleton as a central coordinator, say one that manages a task list, and multiple ClusterSingletonProxy clients pull from that list the tasks they should run. If we need push-style task dispatch instead, where the singleton actively notifies actors of a certain type in the cluster to run tasks, then talking through a ClusterSingletonProxy no longer fits, and the pub/sub mechanism is a workable solution.
Distributed pub/sub supports two delivery modes, Publish and Send, which stand for broadcast and point-to-point publishing respectively. In a cluster, akka-cluster runs one DistributedPubSubMediator instance on every node as that node's channel for publishing and subscribing to messages. A publisher publishes a message only once to each node; the node's single Mediator receives it and forwards it to every local subscriber of that topic. Publish is a message type:
final case class Publish(topic: String, msg: Any, sendOneMessageToEachGroup: Boolean) extends DistributedPubSubMessage {
  def this(topic: String, msg: Any) = this(topic, msg, sendOneMessageToEachGroup = false)
}

object Publish {
  def apply(topic: String, msg: Any) = new Publish(topic, msg)
}
Publishing simply means sending a Publish message to the local Mediator:
class Publisher extends Actor {
  import DistributedPubSubMediator.Publish
  // activate the extension
  val mediator = DistributedPubSub(context.system).mediator

  def receive = {
    case in: String ⇒
      val out = in.toUpperCase
      mediator ! Publish("content", out, sendOneMessageToEachGroup = true)
  }
}
sendOneMessageToEachGroup defaults to false, meaning the published message will not reach subscribers that subscribed with a group id; with true, the message is delivered to one subscriber within each subscribing group and will not reach subscribers that subscribed without a group id (a small group-subscription sketch follows the subscribe/unsubscribe examples below). Subscribe is likewise a message type:
final case class Subscribe(topic: String, group: Option[String], ref: ActorRef) {
  require(topic != null && topic != "", "topic must be defined")

  /**
   * Convenience constructor with `group` None
   */
  def this(topic: String, ref: ActorRef) = this(topic, None, ref)

  /**
   * Java API: constructor with group: String
   */
  def this(topic: String, group: String, ref: ActorRef) = this(topic, Some(group), ref)
}

object Subscribe {
  def apply(topic: String, ref: ActorRef) = new Subscribe(topic, ref)
}
Subscribing means sending a Subscribe message to the local Mediator:
val mediator = DistributedPubSub(context.system).mediator
// subscribe to the topic named "content"
mediator ! Subscribe("content", self)

def receive = {
  case s: String ⇒
    log.info("Got {}", s)
  case SubscribeAck(Subscribe("content", None, `self`)) ⇒
    log.info("subscribing")
}
mediator ! Unsubscribe("content", self)

def receive = {
  ...
  case UnsubscribeAck(Unsubscribe("content", None, `self`)) ⇒
    log.info("unsubscribing")
}
To cancel a subscription, send an Unsubscribe message to the Mediator.
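As a hedged illustration of the group semantics described above, here is a minimal sketch; the GroupWorker name and the "workers" group id are invented for this example:

import akka.actor.{Actor, ActorLogging}
import akka.cluster.pubsub.DistributedPubSub
import akka.cluster.pubsub.DistributedPubSubMediator.{Publish, Subscribe, SubscribeAck}

// every worker subscribes to "content" under the same group id, so a grouped
// Publish is delivered to only one member of the group
class GroupWorker extends Actor with ActorLogging {
  val mediator = DistributedPubSub(context.system).mediator
  mediator ! Subscribe("content", Some("workers"), self)

  def receive = {
    case SubscribeAck(_) => log.info("subscribed as a group member")
    case job: String     => log.info("processing {}", job)
  }
}

// publisher side: each published job reaches one member per subscribing group
// mediator ! Publish("content", "job-1", sendOneMessageToEachGroup = true)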
Send/Put is a point-to-point mode and needs no topic as a subscription target. Again, Send and Put are both message types; Put registers a destination actor (and its path) with the Mediator:
class Destination extends Actor with ActorLogging {
  import DistributedPubSubMediator.Put
  val mediator = DistributedPubSub(context.system).mediator
  // register to the path
  mediator ! Put(self)

  def receive = {
    case s: String ⇒
      log.info("Got {}", s)
  }
}
mediator ! DistributedPubSubMediator.Remove(path)
Put registers self, including its path, with the Mediator, so deregistering simply means removing that particular path from the Mediator with Remove. Because this is a point-to-point mode, Send addresses a message to a specific path:
final case class Send(path: String, msg: Any, localAffinity: Boolean) extends DistributedPubSubMessage {
  /**
   * Convenience constructor with `localAffinity` false
   */
  def this(path: String, msg: Any) = this(path, msg, localAffinity = false)
}

final case class SendToAll(path: String, msg: Any, allButSelf: Boolean = false) extends DistributedPubSubMessage {
  def this(path: String, msg: Any) = this(path, msg, allButSelf = false)
}
Send uses a routing policy to pick one node among those that have registered a matching path and delivers the message there; the policy itself is configurable (see the sketch after the Sender examples below). localAffinity=true means actors registered under a matching path on the sending node take priority. SendToAll instead sends the message to every node that has registered a matching path:
class Sender extends Actor {
  import DistributedPubSubMediator.Send
  // activate the extension
  val mediator = DistributedPubSub(context.system).mediator

  def receive = {
    case in: String =>
      val out = in.toUpperCase
      mediator ! Send(path = "/user/destination", msg = out, localAffinity = true)
  }
}

class SenderToAll extends Actor {
  import DistributedPubSubMediator.SendToAll
  // activate the extension
  val mediator = DistributedPubSub(context.system).mediator

  def receive = {
    case in: String =>
      val out = in.toUpperCase
      mediator ! SendToAll(path = "/user/destination", msg = out)
  }
}
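The routing policy that Send uses to pick a node is itself configurable. Here is a sketch of the relevant settings as I remember them from the akka-cluster-tools reference.conf; please verify the keys and values against your Akka version:

akka.cluster.pub-sub {
  # routing logic used by Send when several nodes have registered the same path
  # random is the default; round-robin is another documented value
  routing-logic = random
  # how often the mediators gossip their registry to their peers
  gossip-interval = 1s
}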
Now let's walk through an example of distributed pub/sub and, at the same time, demonstrate using protobuf-encoded messages as the message type of the publish/subscribe mechanism. This brings to mind the MongoDB-Streaming post introduced earlier, in which database operation commands crossing cluster nodes were serialized in protobuf format. In this example the publisher acts as a database commander that publishes MongoDB operation commands; subscribers subscribe to those commands, unpack each received message into a MongoDB operation command, and run it against the database.
First let's look at how the message serialization formats are configured in application.conf:
actor {
  provider = "cluster"

  serializers {
    java = "akka.serialization.JavaSerializer"
    proto = "akka.remote.serialization.ProtobufSerializer"
  }

  serialization-bindings {
    "java.lang.String" = java
    "scalapb.GeneratedMessage" = proto
  }
}
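As a quick, hedged sanity check that the bindings above take effect, we can ask Akka's serialization extension which serializer it resolves for a generated message. ProtoMGOBson is one of the messages generated later from mgo.proto, and constructing it with field defaults is assumed here:

import akka.actor.ActorSystem
import akka.serialization.SerializationExtension
import com.typesafe.config.ConfigFactory

object SerializerCheck extends App {
  val system = ActorSystem("PubSubSystem", ConfigFactory.load())
  val serialization = SerializationExtension(system)
  val msg = sdp.grpc.services.ProtoMGOBson()  // any ScalaPB-generated message will do
  // expect akka.remote.serialization.ProtobufSerializer for scalapb.GeneratedMessage
  println(serialization.findSerializerFor(msg).getClass.getName)
  system.terminate()
}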
The protobuf messages in this example are generated automatically by scalapb from .proto files. Let's define the subscriber first:
trait PubMessage {}
case class Gossip(msg: String) extends PubMessage
case object StopTalk
class Subscriber extends Actor with ActorLogging {
import monix.execution.Scheduler.Implicits.global
implicit val mgosys = context.system
implicit val ec = mgosys.dispatcher
val clientSettings: MongoClientSettings = MongoClientSettings.builder()
.applyToClusterSettings {b =>
b.hosts(List(new ServerAddress("localhost:27017")).asJava)
}.build()
implicit val client: MongoClient = MongoClient(clientSettings)
val mediator = DistributedPubSub(context.system).mediator
override def preStart() = {
mediator ! Subscribe("talks", self)
mediator ! Subscribe("mongodb", self)
super.preStart()
}
override def receive: Receive = {
case Gossip(msg) =>
log.info(s"******* received message: $msg by ${self}")
case SubscribeAck(sub) =>
log.info(s"******* $self Subscribed to ${sub.topic} ...")
case UnsubscribeAck(sub) =>
log.info(s"******* $self Unsubscribed from ${sub.topic} ...")
case StopTalk =>
mediator ! Unsubscribe("talks", self)
mediator ! Unsubscribe("mongodb", self)
case someProto @ Some(proto:ProtoMGOContext) =>
val ctx = MGOContext.fromProto(proto)
log.info(s"****** received MGOContext: $someProto *********")
val task = mgoUpdate[Completed](ctx).toTask
task.runOnComplete {
case Success(s) => println("operations completed successfully.")
case Failure(exception) => println(s"error: ${exception.getMessage}")
}
case msg => log.info(s"**********received some messaged: $msg *********")
}
}
object Subscriber {
def props = Props(new Subscriber)
def create(port: Int): ActorRef = {
val config = ConfigFactory.parseString(s"akka.remote.netty.tcp.port=$port")
.withFallback(ConfigFactory.load())
val system = ActorSystem("PubSubSystem", config)
system.actorOf(props, s"subscriber$port")
}
}
Because the subscriber needs to execute MongoDB commands, it must define a client:
val clientSettings: MongoClientSettings = MongoClientSettings.builder()
.applyToClusterSettings {b =>
b.hosts(List(new ServerAddress("localhost:27017")).asJava)
}.build()
implicit val client: MongoClient = MongoClient(clientSettings)
...
case someProto @ Some(proto:ProtoMGOContext) =>
val ctx = MGOContext.fromProto(proto)
log.info(s"****** received MGOContext: $someProto *********")
val task = mgoUpdate[Completed](ctx).toTask
task.runOnComplete {
case Success(s) => println("operations completed successfully.")
case Failure(exception) => println(s"error: ${exception.getMessage}")
}
It subscribes to the two kinds of messages separately:
override def preStart() = {
mediator ! Subscribe("talks", self)
mediator ! Subscribe("mongodb", self)
super.preStart()
}
...
case StopTalk =>
mediator ! Unsubscribe("talks", self)
mediator ! Unsubscribe("mongodb", self)
The publisher is defined like this:
class Publisher extends Actor with ActorLogging {
val mediator = DistributedPubSub(context.system).mediator
val ctx = MGOContext("testdb","friends")
override def receive: Receive = {
case Gossip(msg) =>
mediator ! Publish("talks", Gossip(msg))
log.info(s"published message: $msg")
case StopTalk =>
mediator ! Publish("talks", StopTalk)
log.info("everyone stop!")
case doc @ Document(_) =>
val c = ctx.setCommand(MGOCommands.Insert(Seq(doc)))
log.info(s"*****publishing mongo command: ${c}")
mediator ! Publish("mongodb",c.toSomeProto)
}
}
object Publisher {
def props = Props(new Publisher)
def create(port: Int): ActorRef = {
val config = ConfigFactory.parseString(s"akka.remote.netty.tcp.port=${port}")
.withFallback(ConfigFactory.load())
val system = ActorSystem("PubSubSystem", config)
system.actorOf(props, "publisher")
}
}
The publisher constructs the command: insert a Document as one record into the MongoDB friends collection. The command is then converted into protobuf format:
val ctx = MGOContext("testdb","friends")
override def receive: Receive = {
...
case doc @ Document(_) =>
val c = ctx.setCommand(MGOCommands.Insert(Seq(doc)))
log.info(s"*****publishing mongo command: ${c}")
mediator ! Publish("mongodb",c.toSomeProto)
}
Below is the publisher/subscriber demo application:
package pubsubdemo
import org.mongodb.scala._
object PubSubDemo extends App {
val publisher = Publisher.create(2551) //seed node
scala.io.StdIn.readLine()
Subscriber.create(2552)
scala.io.StdIn.readLine()
Subscriber.create(2553)
scala.io.StdIn.readLine()
publisher ! Gossip("hello everyone!")
scala.io.StdIn.readLine()
publisher ! Gossip("do you hear me ?")
scala.io.StdIn.readLine()
//MongoDB operation demo
val peter = Document("name" -> "MAGRET KOON", "age" -> 28)
publisher ! peter
scala.io.StdIn.readLine()
publisher ! StopTalk
scala.io.StdIn.readLine()
}
It is worth noting that the system creates two subscribers, on 2552 and 2553, which means the MongoDB operation command gets executed twice. Our demo does not care about this detail.
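If the duplication did matter, one hedged way to avoid it would be to reuse the group mechanism shown earlier: every Subscriber joins the same group (the "db-workers" id is invented here) and the Publisher marks the publish as one-message-per-group:

// in Subscriber.preStart: join a common group instead of a plain topic subscription
mediator ! Subscribe("mongodb", Some("db-workers"), self)
// in Publisher: each command is then delivered to only one member of each group
mediator ! Publish("mongodb", c.toSomeProto, sendOneMessageToEachGroup = true)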
Below is the full source code for this discussion:
project/scalapb.sbt
addSbtPlugin("com.thesamet" % "sbt-protoc" % "0.99.18")
libraryDependencies ++= Seq(
"com.thesamet.scalapb" %% "compilerplugin" % "0.7.4"
)
build.sbt
import scalapb.compiler.Version.scalapbVersion
import scalapb.compiler.Version.grpcJavaVersion
name := "distributed-pub-sub"
version := "0.1"
scalaVersion := "2.12.7"
scalacOptions += "-Ypartial-unification"
libraryDependencies := Seq(
"com.typesafe.akka" %% "akka-actor" % "2.5.17",
"com.typesafe.akka" %% "akka-cluster-tools" % "2.5.17",
"com.thesamet.scalapb" %% "scalapb-runtime" % scalapbVersion % "protobuf",
// "io.grpc" % "grpc-netty" % grpcJavaVersion,
"com.thesamet.scalapb" %% "scalapb-runtime-grpc" % scalapbVersion,
"io.monix" %% "monix" % "2.3.0",
//for mongodb 4.0
"org.mongodb.scala" %% "mongo-scala-driver" % "2.4.0",
"com.lightbend.akka" %% "akka-stream-alpakka-mongodb" % "0.20",
//other dependencies
"co.fs2" %% "fs2-core" % "0.9.7",
"ch.qos.logback" % "logback-classic" % "1.2.3",
"org.typelevel" %% "cats-core" % "0.9.0",
"io.monix" %% "monix-execution" % "3.0.0-RC1",
"io.monix" %% "monix-eval" % "3.0.0-RC1"
)
PB.targets in Compile := Seq(
scalapb.gen() -> (sourceManaged in Compile).value
)
protobuf/sdp.proto
syntax = "proto3";
import "google/protobuf/wrappers.proto";
import "google/protobuf/any.proto";
import "scalapb/scalapb.proto";
option (scalapb.options) = {
// use a custom Scala package name
// package_name: "io.ontherocks.introgrpc.demo"
// don't append file name to package
flat_package: true
// generate one Scala file for all messages (services still get their own file)
single_file: true
// add imports to generated file
// useful when extending traits or using custom types
// import: "io.ontherocks.hellogrpc.RockingMessage"
// code to put at the top of generated file
// works only with `single_file: true`
//preamble: "sealed trait SomeSealedTrait"
};
package sdp.grpc.services;
message ProtoDate {
int32 yyyy = 1;
int32 mm = 2;
int32 dd = 3;
}
message ProtoTime {
int32 hh = 1;
int32 mm = 2;
int32 ss = 3;
int32 nnn = 4;
}
message ProtoDateTime {
ProtoDate date = 1;
ProtoTime time = 2;
}
message ProtoAny {
bytes value = 1;
}
protobuf/mgo.proto
syntax = "proto3";
import "google/protobuf/wrappers.proto";
import "google/protobuf/any.proto";
import "scalapb/scalapb.proto";
option (scalapb.options) = {
// use a custom Scala package name
// package_name: "io.ontherocks.introgrpc.demo"
// don't append file name to package
flat_package: true
// generate one Scala file for all messages (services still get their own file)
single_file: true
// add imports to generated file
// useful when extending traits or using custom types
// import: "io.ontherocks.hellogrpc.RockingMessage"
// code to put at the top of generated file
// works only with `single_file: true`
//preamble: "sealed trait SomeSealedTrait"
};
/*
* Demoes various customization options provided by ScalaPBs.
*/
package sdp.grpc.services;
import "sdp.proto";
message ProtoMGOBson {
bytes bson = 1;
}
message ProtoMGODocument {
bytes document = 1;
}
message ProtoMGOResultOption { //FindObservable
int32 optType = 1;
ProtoMGOBson bsonParam = 2;
int32 valueParam = 3;
}
message ProtoMGOAdmin{
string tarName = 1;
repeated ProtoMGOBson bsonParam = 2;
ProtoAny options = 3;
string objName = 4;
}
message ProtoMGOContext { //MGOContext
string dbName = 1;
string collName = 2;
int32 commandType = 3;
repeated ProtoMGOBson bsonParam = 4;
repeated ProtoMGOResultOption resultOptions = 5;
repeated string targets = 6;
ProtoAny options = 7;
repeated ProtoMGODocument documents = 8;
google.protobuf.BoolValue only = 9;
ProtoMGOAdmin adminOptions = 10;
}
PubSubActor.scala
package pubsubdemo
import akka.actor._
import akka.cluster.pubsub.DistributedPubSubMediator._
import akka.cluster.pubsub._
import com.typesafe.config._
import akka.actor.ActorSystem
import org.mongodb.scala._
import sdp.grpc.services.ProtoMGOContext
import sdp.mongo.engine.MGOClasses._
import sdp.mongo.engine.MGOEngine._
import sdp.result.DBOResult._
import scala.collection.JavaConverters._
import scala.util._
trait PubMessage {}
case class Gossip(msg: String) extends PubMessage
case object StopTalk
class Subscriber extends Actor with ActorLogging {
import monix.execution.Scheduler.Implicits.global
implicit val mgosys = context.system
implicit val ec = mgosys.dispatcher
val clientSettings: MongoClientSettings = MongoClientSettings.builder()
.applyToClusterSettings {b =>
b.hosts(List(new ServerAddress("localhost:27017")).asJava)
}.build()
implicit val client: MongoClient = MongoClient(clientSettings)
val mediator = DistributedPubSub(context.system).mediator
override def preStart() = {
mediator ! Subscribe("talks", self)
mediator ! Subscribe("mongodb", self)
super.preStart()
}
override def receive: Receive = {
case Gossip(msg) =>
log.info(s"******* received message: $msg by ${self}")
case SubscribeAck(sub) =>
log.info(s"******* $self Subscribed to ${sub.topic} ...")
case UnsubscribeAck(sub) =>
log.info(s"******* $self Unsubscribed from ${sub.topic} ...")
case StopTalk =>
mediator ! Unsubscribe("talks", self)
mediator ! Unsubscribe("mongodb", self)
case someProto @ Some(proto:ProtoMGOContext) =>
val ctx = MGOContext.fromProto(proto)
log.info(s"****** received MGOContext: $someProto *********")
val task = mgoUpdate[Completed](ctx).toTask
task.runOnComplete {
case Success(s) => println("operations completed successfully.")
case Failure(exception) => println(s"error: ${exception.getMessage}")
}
case msg => log.info(s"**********received some messaged: $msg *********")
}
}
object Subscriber {
def props = Props(new Subscriber)
def create(port: Int): ActorRef = {
val config = ConfigFactory.parseString(s"akka.remote.netty.tcp.port=$port")
.withFallback(ConfigFactory.load())
val system = ActorSystem("PubSubSystem", config)
system.actorOf(props, s"subscriber$port")
}
}
class Publisher extends Actor with ActorLogging {
val mediator = DistributedPubSub(context.system).mediator
val ctx = MGOContext("testdb","friends")
override def receive: Receive = {
case Gossip(msg) =>
mediator ! Publish("talks", Gossip(msg))
log.info(s"published message: $msg")
case StopTalk =>
mediator ! Publish("talks", StopTalk)
log.info("everyone stop!")
case doc @ Document(_) =>
val c = ctx.setCommand(MGOCommands.Insert(Seq(doc)))
log.info(s"*****publishing mongo command: ${c}")
mediator ! Publish("mongodb",c.toSomeProto)
}
}
object Publisher {
def props = Props(new Publisher)
def create(port: Int): ActorRef = {
val config = ConfigFactory.parseString(s"akka.remote.netty.tcp.port=${port}")
.withFallback(ConfigFactory.load())
val system = ActorSystem("PubSubSystem", config)
system.actorOf(props, "publisher")
}
}
PubSubDemo.scala
package pubsubdemo
import org.mongodb.scala._
object PubSubDemo extends App {
val publisher = Publisher.create(2551) //seed node
scala.io.StdIn.readLine()
Subscriber.create(2552)
scala.io.StdIn.readLine()
Subscriber.create(2553)
scala.io.StdIn.readLine()
publisher ! Gossip("hello everyone!")
scala.io.StdIn.readLine()
publisher ! Gossip("do you hear me ?")
scala.io.StdIn.readLine()
//MongoDB operation demo
val peter = Document("name" -> "MAGRET KOON", "age" -> 28)
publisher ! peter
scala.io.StdIn.readLine()
publisher ! StopTalk
scala.io.StdIn.readLine()
}
MongoDBEngine.scala
package sdp.mongo.engine
import java.text.SimpleDateFormat
import java.util.Calendar
import akka.NotUsed
import akka.stream.Materializer
import akka.stream.alpakka.mongodb.scaladsl._
import akka.stream.scaladsl.{Flow, Source}
import org.bson.conversions.Bson
import org.mongodb.scala.bson.collection.immutable.Document
import org.mongodb.scala.bson.{BsonArray, BsonBinary}
import org.mongodb.scala.model._
import org.mongodb.scala.{MongoClient, _}
import protobuf.bytes.Converter._
import sdp.file.Streaming._
import sdp.logging.LogSupport
import scala.collection.JavaConverters._
import scala.concurrent._
import scala.concurrent.duration._
object MGOClasses {
type MGO_ACTION_TYPE = Int
val MGO_QUERY = 0
val MGO_UPDATE = 1
val MGO_ADMIN = 2
/* org.mongodb.scala.FindObservable
import com.mongodb.async.client.FindIterable
val resultDocType = FindIterable[Document]
val resultOption = FindObservable(resultDocType)
.maxScan(...)
.limit(...)
.sort(...)
.project(...) */
type FOD_TYPE = Int
val FOD_FIRST = 0 //def first(): SingleObservable[TResult], return the first item
val FOD_FILTER = 1 //def filter(filter: Bson): FindObservable[TResult]
val FOD_LIMIT = 2 //def limit(limit: Int): FindObservable[TResult]
val FOD_SKIP = 3 //def skip(skip: Int): FindObservable[TResult]
val FOD_PROJECTION = 4 //def projection(projection: Bson): FindObservable[TResult]
//Sets a document describing the fields to return for all matching documents
val FOD_SORT = 5 //def sort(sort: Bson): FindObservable[TResult]
val FOD_PARTIAL = 6 //def partial(partial: Boolean): FindObservable[TResult]
//Get partial results from a sharded cluster if one or more shards are unreachable (instead of throwing an error)
val FOD_CURSORTYPE = 7 //def cursorType(cursorType: CursorType): FindObservable[TResult]
//Sets the cursor type
val FOD_HINT = 8 //def hint(hint: Bson): FindObservable[TResult]
//Sets the hint for which index to use. A null value means no hint is set
val FOD_MAX = 9 //def max(max: Bson): FindObservable[TResult]
//Sets the exclusive upper bound for a specific index. A null value means no max is set
val FOD_MIN = 10 //def min(min: Bson): FindObservable[TResult]
//Sets the minimum inclusive lower bound for a specific index. A null value means no max is set
val FOD_RETURNKEY = 11 //def returnKey(returnKey: Boolean): FindObservable[TResult]
//Sets the returnKey. If true the find operation will return only the index keys in the resulting documents
val FOD_SHOWRECORDID=12 //def showRecordId(showRecordId: Boolean): FindObservable[TResult]
//Sets the showRecordId. Set to true to add a field `\$recordId` to the returned documents
case class ResultOptions(
optType: FOD_TYPE,
bson: Option[Bson] = None,
value: Int = 0 ){
def toProto = new sdp.grpc.services.ProtoMGOResultOption(
optType = this.optType,
bsonParam = this.bson.map {b => sdp.grpc.services.ProtoMGOBson(marshal(b))},
valueParam = this.value
)
def toFindObservable: FindObservable[Document] => FindObservable[Document] = find => {
optType match {
case FOD_FIRST => find
case FOD_FILTER => find.filter(bson.get)
case FOD_LIMIT => find.limit(value)
case FOD_SKIP => find.skip(value)
case FOD_PROJECTION => find.projection(bson.get)
case FOD_SORT => find.sort(bson.get)
case FOD_PARTIAL => find.partial(value != 0)
case FOD_CURSORTYPE => find
case FOD_HINT => find.hint(bson.get)
case FOD_MAX => find.max(bson.get)
case FOD_MIN => find.min(bson.get)
case FOD_RETURNKEY => find.returnKey(value != 0)
case FOD_SHOWRECORDID => find.showRecordId(value != 0)
}
}
}
object ResultOptions {
def fromProto(msg: sdp.grpc.services.ProtoMGOResultOption) = new ResultOptions(
optType = msg.optType,
bson = msg.bsonParam.map(b => unmarshal[Bson](b.bson)),
value = msg.valueParam
)
}
trait MGOCommands
object MGOCommands {
case class Count(filter: Option[Bson] = None, options: Option[Any] = None) extends MGOCommands
case class Distict(fieldName: String, filter: Option[Bson] = None) extends MGOCommands
/* org.mongodb.scala.FindObservable
import com.mongodb.async.client.FindIterable
val resultDocType = FindIterable[Document]
val resultOption = FindObservable(resultDocType)
.maxScan(...)
.limit(...)
.sort(...)
.project(...) */
case class Find(filter: Option[Bson] = None,
andThen: Seq[ResultOptions] = Seq.empty[ResultOptions],
firstOnly: Boolean = false) extends MGOCommands
case class Aggregate(pipeLine: Seq[Bson]) extends MGOCommands
case class MapReduce(mapFunction: String, reduceFunction: String) extends MGOCommands
case class Insert(newdocs: Seq[Document], options: Option[Any] = None) extends MGOCommands
case class Delete(filter: Bson, options: Option[Any] = None, onlyOne: Boolean = false) extends MGOCommands
case class Replace(filter: Bson, replacement: Document, options: Option[Any] = None) extends MGOCommands
case class Update(filter: Bson, update: Bson, options: Option[Any] = None, onlyOne: Boolean = false) extends MGOCommands
case class BulkWrite(commands: List[WriteModel[Document]], options: Option[Any] = None) extends MGOCommands
}
object MGOAdmins {
case class DropCollection(collName: String) extends MGOCommands
case class CreateCollection(collName: String, options: Option[Any] = None) extends MGOCommands
case class ListCollection(dbName: String) extends MGOCommands
case class CreateView(viewName: String, viewOn: String, pipeline: Seq[Bson], options: Option[Any] = None) extends MGOCommands
case class CreateIndex(key: Bson, options: Option[Any] = None) extends MGOCommands
case class DropIndexByName(indexName: String, options: Option[Any] = None) extends MGOCommands
case class DropIndexByKey(key: Bson, options: Option[Any] = None) extends MGOCommands
case class DropAllIndexes(options: Option[Any] = None) extends MGOCommands
}
case class MGOContext(
dbName: String,
collName: String,
actionType: MGO_ACTION_TYPE = MGO_QUERY,
action: Option[MGOCommands] = None,
actionOptions: Option[Any] = None,
actionTargets: Seq[String] = Nil
) {
ctx =>
def setDbName(name: String): MGOContext = ctx.copy(dbName = name)
def setCollName(name: String): MGOContext = ctx.copy(collName = name)
def setActionType(at: MGO_ACTION_TYPE): MGOContext = ctx.copy(actionType = at)
def setCommand(cmd: MGOCommands): MGOContext = ctx.copy(action = Some(cmd))
def toSomeProto = MGOProtoConversion.ctxToProto(this)
}
object MGOContext {
def apply(db: String, coll: String) = new MGOContext(db, coll)
def fromProto(proto: sdp.grpc.services.ProtoMGOContext): MGOContext =
MGOProtoConversion.ctxFromProto(proto)
}
case class MGOBatContext(contexts: Seq[MGOContext], tx: Boolean = false) {
ctxs =>
def setTx(txopt: Boolean): MGOBatContext = ctxs.copy(tx = txopt)
def appendContext(ctx: MGOContext): MGOBatContext =
ctxs.copy(contexts = contexts :+ ctx)
}
object MGOBatContext {
def apply(ctxs: Seq[MGOContext], tx: Boolean = false) = new MGOBatContext(ctxs,tx)
}
type MGODate = java.util.Date
def mgoDate(yyyy: Int, mm: Int, dd: Int): MGODate = {
val ca = Calendar.getInstance()
ca.set(yyyy,mm,dd)
ca.getTime()
}
def mgoDateTime(yyyy: Int, mm: Int, dd: Int, hr: Int, min: Int, sec: Int): MGODate = {
val ca = Calendar.getInstance()
ca.set(yyyy,mm,dd,hr,min,sec)
ca.getTime()
}
def mgoDateTimeNow: MGODate = {
val ca = Calendar.getInstance()
ca.getTime
}
def mgoDateToString(dt: MGODate, formatString: String): String = {
val fmt= new SimpleDateFormat(formatString)
fmt.format(dt)
}
type MGOBlob = BsonBinary
type MGOArray = BsonArray
def fileToMGOBlob(fileName: String, timeOut: FiniteDuration = 60 seconds)(
implicit mat: Materializer) = FileToByteArray(fileName,timeOut)
def mgoBlobToFile(blob: MGOBlob, fileName: String)(
implicit mat: Materializer) = ByteArrayToFile(blob.getData,fileName)
def mgoGetStringOrNone(doc: Document, fieldName: String) = {
if (doc.keySet.contains(fieldName))
Some(doc.getString(fieldName))
else None
}
def mgoGetIntOrNone(doc: Document, fieldName: String) = {
if (doc.keySet.contains(fieldName))
Some(doc.getInteger(fieldName))
else None
}
def mgoGetLonggOrNone(doc: Document, fieldName: String) = {
if (doc.keySet.contains(fieldName))
Some(doc.getLong(fieldName))
else None
}
def mgoGetDoubleOrNone(doc: Document, fieldName: String) = {
if (doc.keySet.contains(fieldName))
Some(doc.getDouble(fieldName))
else None
}
def mgoGetBoolOrNone(doc: Document, fieldName: String) = {
if (doc.keySet.contains(fieldName))
Some(doc.getBoolean(fieldName))
else None
}
def mgoGetDateOrNone(doc: Document, fieldName: String) = {
if (doc.keySet.contains(fieldName))
Some(doc.getDate(fieldName))
else None
}
def mgoGetBlobOrNone(doc: Document, fieldName: String) = {
if (doc.keySet.contains(fieldName))
doc.get(fieldName).asInstanceOf[Option[MGOBlob]]
else None
}
def mgoGetArrayOrNone(doc: Document, fieldName: String) = {
if (doc.keySet.contains(fieldName))
doc.get(fieldName).asInstanceOf[Option[MGOArray]]
else None
}
def mgoArrayToDocumentList(arr: MGOArray): scala.collection.immutable.List[org.bson.BsonDocument] = {
(arr.getValues.asScala.toList)
.asInstanceOf[scala.collection.immutable.List[org.bson.BsonDocument]]
}
type MGOFilterResult = FindObservable[Document] => FindObservable[Document]
}
object MGOEngine extends LogSupport {
import MGOClasses._
import MGOAdmins._
import MGOCommands._
import sdp.result.DBOResult._
object TxUpdateMode {
private def mgoTxUpdate(ctxs: MGOBatContext, observable: SingleObservable[ClientSession])(
implicit client: MongoClient, ec: ExecutionContext): SingleObservable[ClientSession] = {
log.info(s"mgoTxUpdate> calling ...")
observable.map(clientSession => {
val transactionOptions =
TransactionOptions.builder()
.readConcern(ReadConcern.SNAPSHOT)
.writeConcern(WriteConcern.MAJORITY).build()
clientSession.startTransaction(transactionOptions)
/*
val fut = Future.traverse(ctxs.contexts) { ctx =>
mgoUpdateObservable[Completed](ctx).map(identity).toFuture()
}
Await.ready(fut, 3 seconds) */
ctxs.contexts.foreach { ctx =>
mgoUpdateObservable[Completed](ctx).map(identity).toFuture()
}
clientSession
})
}
private def commitAndRetry(observable: SingleObservable[Completed]): SingleObservable[Completed] = {
log.info(s"commitAndRetry> calling ...")
observable.recoverWith({
case e: MongoException if e.hasErrorLabel(MongoException.UNKNOWN_TRANSACTION_COMMIT_RESULT_LABEL) => {
log.warn("commitAndRetry> UnknownTransactionCommitResult, retrying commit operation ...")
commitAndRetry(observable)
}
case e: Exception => {
log.error(s"commitAndRetry> Exception during commit ...: $e")
throw e
}
})
}
private def runTransactionAndRetry(observable: SingleObservable[Completed]): SingleObservable[Completed] = {
log.info(s"runTransactionAndRetry> calling ...")
observable.recoverWith({
case e: MongoException if e.hasErrorLabel(MongoException.TRANSIENT_TRANSACTION_ERROR_LABEL) => {
log.warn("runTransactionAndRetry> TransientTransactionError, aborting transaction and retrying ...")
runTransactionAndRetry(observable)
}
})
}
def mgoTxBatch(ctxs: MGOBatContext)(
implicit client: MongoClient, ec: ExecutionContext): DBOResult[Completed] = {
log.info(s"mgoTxBatch> MGOBatContext: ${ctxs}")
val updateObservable: Observable[ClientSession] = mgoTxUpdate(ctxs, client.startSession())
val commitTransactionObservable: SingleObservable[Completed] =
updateObservable.flatMap(clientSession => clientSession.commitTransaction())
val commitAndRetryObservable: SingleObservable[Completed] = commitAndRetry(commitTransactionObservable)
runTransactionAndRetry(commitAndRetryObservable)
valueToDBOResult(Completed())
}
}
def mgoUpdateBatch(ctxs: MGOBatContext)(implicit client: MongoClient, ec: ExecutionContext): DBOResult[Completed] = {
log.info(s"mgoUpdateBatch> MGOBatContext: ${ctxs}")
if (ctxs.tx) {
TxUpdateMode.mgoTxBatch(ctxs)
} else {
/*
val fut = Future.traverse(ctxs.contexts) { ctx =>
mgoUpdate[Completed](ctx).map(identity) }
Await.ready(fut, 3 seconds)
Future.successful(new Completed) */
ctxs.contexts.foreach { ctx =>
mgoUpdate[Completed](ctx).map(identity) }
valueToDBOResult(Completed())
}
}
def mongoStream(ctx: MGOContext)(
implicit client: MongoClient, ec: ExecutionContextExecutor): Source[Document, NotUsed] = {
log.info(s"mongoStream> MGOContext: ${ctx}")
def toResultOption(rts: Seq[ResultOptions]): FindObservable[Document] => FindObservable[Document] = findObj =>
rts.foldRight(findObj)((a,b) => a.toFindObservable(b))
val db = client.getDatabase(ctx.dbName)
val coll = db.getCollection(ctx.collName)
if ( ctx.action == None) {
log.error(s"mongoStream> uery action cannot be null!")
throw new IllegalArgumentException("query action cannot be null!")
}
try {
ctx.action.get match {
case Find(None, Nil, false) => //FindObservable
MongoSource(coll.find())
case Find(None, Nil, true) => //FindObservable
MongoSource(coll.find().first())
case Find(Some(filter), Nil, false) => //FindObservable
MongoSource(coll.find(filter))
case Find(Some(filter), Nil, true) => //FindObservable
MongoSource(coll.find(filter).first())
case Find(None, sro, _) => //FindObservable
val next = toResultOption(sro)
MongoSource(next(coll.find[Document]()))
case Find(Some(filter), sro, _) => //FindObservable
val next = toResultOption(sro)
MongoSource(next(coll.find[Document](filter)))
case _ =>
log.error(s"mongoStream> unsupported streaming query [${ctx.action.get}]")
throw new RuntimeException(s"mongoStream> unsupported streaming query [${ctx.action.get}]")
}
}
catch { case e: Exception =>
log.error(s"mongoStream> runtime error: ${e.getMessage}")
throw new RuntimeException(s"mongoStream> Error: ${e.getMessage}")
}
}
// T => FindIterable e.g List[Document]
def mgoQuery[T](ctx: MGOContext, Converter: Option[Document => Any] = None)(implicit client: MongoClient): DBOResult[T] = {
log.info(s"mgoQuery> MGOContext: ${ctx}")
val db = client.getDatabase(ctx.dbName)
val coll = db.getCollection(ctx.collName)
def toResultOption(rts: Seq[ResultOptions]): FindObservable[Document] => FindObservable[Document] = findObj =>
rts.foldRight(findObj)((a,b) => a.toFindObservable(b))
if ( ctx.action == None) {
log.error(s"mgoQuery> uery action cannot be null!")
Left(new IllegalArgumentException("query action cannot be null!"))
}
try {
ctx.action.get match {
/* count */
case Count(Some(filter), Some(opt)) => //SingleObservable
coll.countDocuments(filter, opt.asInstanceOf[CountOptions])
.toFuture().asInstanceOf[Future[T]]
case Count(Some(filter), None) => //SingleObservable
coll.countDocuments(filter).toFuture()
.asInstanceOf[Future[T]]
case Count(None, None) => //SingleObservable
coll.countDocuments().toFuture()
.asInstanceOf[Future[T]]
/* distinct */
case Distict(field, Some(filter)) => //DistinctObservable
coll.distinct(field, filter).toFuture()
.asInstanceOf[Future[T]]
case Distict(field, None) => //DistinctObservable
coll.distinct((field)).toFuture()
.asInstanceOf[Future[T]]
/* find */
case Find(None, Nil, false) => //FindObservable
if (Converter == None) coll.find().toFuture().asInstanceOf[Future[T]]
else coll.find().map(Converter.get).toFuture().asInstanceOf[Future[T]]
case Find(None, Nil, true) => //FindObservable
if (Converter == None) coll.find().first().head().asInstanceOf[Future[T]]
else coll.find().first().map(Converter.get).head().asInstanceOf[Future[T]]
case Find(Some(filter), Nil, false) => //FindObservable
if (Converter == None) coll.find(filter).toFuture().asInstanceOf[Future[T]]
else coll.find(filter).map(Converter.get).toFuture().asInstanceOf[Future[T]]
case Find(Some(filter), Nil, true) => //FindObservable
if (Converter == None) coll.find(filter).first().head().asInstanceOf[Future[T]]
else coll.find(filter).first().map(Converter.get).head().asInstanceOf[Future[T]]
case Find(None, sro, _) => //FindObservable
val next = toResultOption(sro)
if (Converter == None) next(coll.find[Document]()).toFuture().asInstanceOf[Future[T]]
else next(coll.find[Document]()).map(Converter.get).toFuture().asInstanceOf[Future[T]]
case Find(Some(filter), sro, _) => //FindObservable
val next = toResultOption(sro)
if (Converter == None) next(coll.find[Document](filter)).toFuture().asInstanceOf[Future[T]]
else next(coll.find[Document](filter)).map(Converter.get).toFuture().asInstanceOf[Future[T]]
/* aggregate AggregateObservable*/
case Aggregate(pline) => coll.aggregate(pline).toFuture().asInstanceOf[Future[T]]
/* mapReduce MapReduceObservable*/
case MapReduce(mf, rf) => coll.mapReduce(mf, rf).toFuture().asInstanceOf[Future[T]]
/* list collection */
case ListCollection(dbName) => //ListConllectionObservable
client.getDatabase(dbName).listCollections().toFuture().asInstanceOf[Future[T]]
}
}
catch { case e: Exception =>
log.error(s"mgoQuery> runtime error: ${e.getMessage}")
Left(new RuntimeException(s"mgoQuery> Error: ${e.getMessage}"))
}
}
//T => Completed, result.UpdateResult, result.DeleteResult
def mgoUpdate[T](ctx: MGOContext)(implicit client: MongoClient): DBOResult[T] =
try {
mgoUpdateObservable[T](ctx).toFuture()
}
catch { case e: Exception =>
log.error(s"mgoUpdate> runtime error: ${e.getMessage}")
Left(new RuntimeException(s"mgoUpdate> Error: ${e.getMessage}"))
}
def mgoUpdateObservable[T](ctx: MGOContext)(implicit client: MongoClient): SingleObservable[T] = {
log.info(s"mgoUpdateObservable> MGOContext: ${ctx}")
val db = client.getDatabase(ctx.dbName)
val coll = db.getCollection(ctx.collName)
if ( ctx.action == None) {
log.error(s"mgoUpdateObservable> uery action cannot be null!")
throw new IllegalArgumentException("mgoUpdateObservable> query action cannot be null!")
}
try {
ctx.action.get match {
/* insert */
case Insert(docs, Some(opt)) => //SingleObservable[Completed]
if (docs.size > 1)
coll.insertMany(docs, opt.asInstanceOf[InsertManyOptions]).asInstanceOf[SingleObservable[T]]
else coll.insertOne(docs.head, opt.asInstanceOf[InsertOneOptions]).asInstanceOf[SingleObservable[T]]
case Insert(docs, None) => //SingleObservable
if (docs.size > 1) coll.insertMany(docs).asInstanceOf[SingleObservable[T]]
else coll.insertOne(docs.head).asInstanceOf[SingleObservable[T]]
/* delete */
case Delete(filter, None, onlyOne) => //SingleObservable
if (onlyOne) coll.deleteOne(filter).asInstanceOf[SingleObservable[T]]
else coll.deleteMany(filter).asInstanceOf[SingleObservable[T]]
case Delete(filter, Some(opt), onlyOne) => //SingleObservable
if (onlyOne) coll.deleteOne(filter, opt.asInstanceOf[DeleteOptions]).asInstanceOf[SingleObservable[T]]
else coll.deleteMany(filter, opt.asInstanceOf[DeleteOptions]).asInstanceOf[SingleObservable[T]]
/* replace */
case Replace(filter, replacement, None) => //SingleObservable
coll.replaceOne(filter, replacement).asInstanceOf[SingleObservable[T]]
case Replace(filter, replacement, Some(opt)) => //SingleObservable
coll.replaceOne(filter, replacement, opt.asInstanceOf[ReplaceOptions]).asInstanceOf[SingleObservable[T]]
/* update */
case Update(filter, update, None, onlyOne) => //SingleObservable
if (onlyOne) coll.updateOne(filter, update).asInstanceOf[SingleObservable[T]]
else coll.updateMany(filter, update).asInstanceOf[SingleObservable[T]]
case Update(filter, update, Some(opt), onlyOne) => //SingleObservable
if (onlyOne) coll.updateOne(filter, update, opt.asInstanceOf[UpdateOptions]).asInstanceOf[SingleObservable[T]]
else coll.updateMany(filter, update, opt.asInstanceOf[UpdateOptions]).asInstanceOf[SingleObservable[T]]
/* bulkWrite */
case BulkWrite(commands, None) => //SingleObservable
coll.bulkWrite(commands).asInstanceOf[SingleObservable[T]]
case BulkWrite(commands, Some(opt)) => //SingleObservable
coll.bulkWrite(commands, opt.asInstanceOf[BulkWriteOptions]).asInstanceOf[SingleObservable[T]]
}
}
catch { case e: Exception =>
log.error(s"mgoUpdateObservable> runtime error: ${e.getMessage}")
throw new RuntimeException(s"mgoUpdateObservable> Error: ${e.getMessage}")
}
}
def mgoAdmin(ctx: MGOContext)(implicit client: MongoClient): DBOResult[Completed] = {
log.info(s"mgoAdmin> MGOContext: ${ctx}")
val db = client.getDatabase(ctx.dbName)
val coll = db.getCollection(ctx.collName)
if ( ctx.action == None) {
log.error(s"mgoAdmin> uery action cannot be null!")
Left(new IllegalArgumentException("mgoAdmin> query action cannot be null!"))
}
try {
ctx.action.get match {
/* drop collection */
case DropCollection(collName) => //SingleObservable
val coll = db.getCollection(collName)
coll.drop().toFuture()
/* create collection */
case CreateCollection(collName, None) => //SingleObservable
db.createCollection(collName).toFuture()
case CreateCollection(collName, Some(opt)) => //SingleObservable
db.createCollection(collName, opt.asInstanceOf[CreateCollectionOptions]).toFuture()
/* list collection
case ListCollection(dbName) => //ListConllectionObservable
client.getDatabase(dbName).listCollections().toFuture().asInstanceOf[Future[T]]
*/
/* create view */
case CreateView(viewName, viewOn, pline, None) => //SingleObservable
db.createView(viewName, viewOn, pline).toFuture()
case CreateView(viewName, viewOn, pline, Some(opt)) => //SingleObservable
db.createView(viewName, viewOn, pline, opt.asInstanceOf[CreateViewOptions]).toFuture()
/* create index */
case CreateIndex(key, None) => //SingleObservable
coll.createIndex(key).toFuture().asInstanceOf[Future[Completed]] // asInstanceOf[SingleObservable[Completed]]
case CreateIndex(key, Some(opt)) => //SingleObservable
coll.createIndex(key, opt.asInstanceOf[IndexOptions]).asInstanceOf[Future[Completed]] // asInstanceOf[SingleObservable[Completed]]
/* drop index */
case DropIndexByName(indexName, None) => //SingleObservable
coll.dropIndex(indexName).toFuture()
case DropIndexByName(indexName, Some(opt)) => //SingleObservable
coll.dropIndex(indexName, opt.asInstanceOf[DropIndexOptions]).toFuture()
case DropIndexByKey(key, None) => //SingleObservable
coll.dropIndex(key).toFuture()
case DropIndexByKey(key, Some(opt)) => //SingleObservable
coll.dropIndex(key, opt.asInstanceOf[DropIndexOptions]).toFuture()
case DropAllIndexes(None) => //SingleObservable
coll.dropIndexes().toFuture()
case DropAllIndexes(Some(opt)) => //SingleObservable
coll.dropIndexes(opt.asInstanceOf[DropIndexOptions]).toFuture()
}
}
catch { case e: Exception =>
log.error(s"mgoAdmin> runtime error: ${e.getMessage}")
throw new RuntimeException(s"mgoAdmin> Error: ${e.getMessage}")
}
}
/*
def mgoExecute[T](ctx: MGOContext)(implicit client: MongoClient): Future[T] = {
val db = client.getDatabase(ctx.dbName)
val coll = db.getCollection(ctx.collName)
ctx.action match {
/* count */
case Count(Some(filter), Some(opt)) => //SingleObservable
coll.countDocuments(filter, opt.asInstanceOf[CountOptions])
.toFuture().asInstanceOf[Future[T]]
case Count(Some(filter), None) => //SingleObservable
coll.countDocuments(filter).toFuture()
.asInstanceOf[Future[T]]
case Count(None, None) => //SingleObservable
coll.countDocuments().toFuture()
.asInstanceOf[Future[T]]
/* distinct */
case Distict(field, Some(filter)) => //DistinctObservable
coll.distinct(field, filter).toFuture()
.asInstanceOf[Future[T]]
case Distict(field, None) => //DistinctObservable
coll.distinct((field)).toFuture()
.asInstanceOf[Future[T]]
/* find */
case Find(None, None, optConv, false) => //FindObservable
if (optConv == None) coll.find().toFuture().asInstanceOf[Future[T]]
else coll.find().map(optConv.get).toFuture().asInstanceOf[Future[T]]
case Find(None, None, optConv, true) => //FindObservable
if (optConv == None) coll.find().first().head().asInstanceOf[Future[T]]
else coll.find().first().map(optConv.get).head().asInstanceOf[Future[T]]
case Find(Some(filter), None, optConv, false) => //FindObservable
if (optConv == None) coll.find(filter).toFuture().asInstanceOf[Future[T]]
else coll.find(filter).map(optConv.get).toFuture().asInstanceOf[Future[T]]
case Find(Some(filter), None, optConv, true) => //FindObservable
if (optConv == None) coll.find(filter).first().head().asInstanceOf[Future[T]]
else coll.find(filter).first().map(optConv.get).head().asInstanceOf[Future[T]]
case Find(None, Some(next), optConv, _) => //FindObservable
if (optConv == None) next(coll.find[Document]()).toFuture().asInstanceOf[Future[T]]
else next(coll.find[Document]()).map(optConv.get).toFuture().asInstanceOf[Future[T]]
case Find(Some(filter), Some(next), optConv, _) => //FindObservable
if (optConv == None) next(coll.find[Document](filter)).toFuture().asInstanceOf[Future[T]]
else next(coll.find[Document](filter)).map(optConv.get).toFuture().asInstanceOf[Future[T]]
/* aggregate AggregateObservable*/
case Aggregate(pline) => coll.aggregate(pline).toFuture().asInstanceOf[Future[T]]
/* mapReduce MapReduceObservable*/
case MapReduce(mf, rf) => coll.mapReduce(mf, rf).toFuture().asInstanceOf[Future[T]]
/* insert */
case Insert(docs, Some(opt)) => //SingleObservable[Completed]
if (docs.size > 1) coll.insertMany(docs, opt.asInstanceOf[InsertManyOptions]).toFuture()
.asInstanceOf[Future[T]]
else coll.insertOne(docs.head, opt.asInstanceOf[InsertOneOptions]).toFuture()
.asInstanceOf[Future[T]]
case Insert(docs, None) => //SingleObservable
if (docs.size > 1) coll.insertMany(docs).toFuture().asInstanceOf[Future[T]]
else coll.insertOne(docs.head).toFuture().asInstanceOf[Future[T]]
/* delete */
case Delete(filter, None, onlyOne) => //SingleObservable
if (onlyOne) coll.deleteOne(filter).toFuture().asInstanceOf[Future[T]]
else coll.deleteMany(filter).toFuture().asInstanceOf[Future[T]]
case Delete(filter, Some(opt), onlyOne) => //SingleObservable
if (onlyOne) coll.deleteOne(filter, opt.asInstanceOf[DeleteOptions]).toFuture().asInstanceOf[Future[T]]
else coll.deleteMany(filter, opt.asInstanceOf[DeleteOptions]).toFuture().asInstanceOf[Future[T]]
/* replace */
case Replace(filter, replacement, None) => //SingleObservable
coll.replaceOne(filter, replacement).toFuture().asInstanceOf[Future[T]]
case Replace(filter, replacement, Some(opt)) => //SingleObservable
coll.replaceOne(filter, replacement, opt.asInstanceOf[UpdateOptions]).toFuture().asInstanceOf[Future[T]]
/* update */
case Update(filter, update, None, onlyOne) => //SingleObservable
if (onlyOne) coll.updateOne(filter, update).toFuture().asInstanceOf[Future[T]]
else coll.updateMany(filter, update).toFuture().asInstanceOf[Future[T]]
case Update(filter, update, Some(opt), onlyOne) => //SingleObservable
if (onlyOne) coll.updateOne(filter, update, opt.asInstanceOf[UpdateOptions]).toFuture().asInstanceOf[Future[T]]
else coll.updateMany(filter, update, opt.asInstanceOf[UpdateOptions]).toFuture().asInstanceOf[Future[T]]
/* bulkWrite */
case BulkWrite(commands, None) => //SingleObservable
coll.bulkWrite(commands).toFuture().asInstanceOf[Future[T]]
case BulkWrite(commands, Some(opt)) => //SingleObservable
coll.bulkWrite(commands, opt.asInstanceOf[BulkWriteOptions]).toFuture().asInstanceOf[Future[T]]
/* drop collection */
case DropCollection(collName) => //SingleObservable
val coll = db.getCollection(collName)
coll.drop().toFuture().asInstanceOf[Future[T]]
/* create collection */
case CreateCollection(collName, None) => //SingleObservable
db.createCollection(collName).toFuture().asInstanceOf[Future[T]]
case CreateCollection(collName, Some(opt)) => //SingleObservable
db.createCollection(collName, opt.asInstanceOf[CreateCollectionOptions]).toFuture().asInstanceOf[Future[T]]
/* list collection */
case ListCollection(dbName) => //ListConllectionObservable
client.getDatabase(dbName).listCollections().toFuture().asInstanceOf[Future[T]]
/* create view */
case CreateView(viewName, viewOn, pline, None) => //SingleObservable
db.createView(viewName, viewOn, pline).toFuture().asInstanceOf[Future[T]]
case CreateView(viewName, viewOn, pline, Some(opt)) => //SingleObservable
db.createView(viewName, viewOn, pline, opt.asInstanceOf[CreateViewOptions]).toFuture().asInstanceOf[Future[T]]
/* create index */
case CreateIndex(key, None) => //SingleObservable
coll.createIndex(key).toFuture().asInstanceOf[Future[T]]
case CreateIndex(key, Some(opt)) => //SingleObservable
coll.createIndex(key, opt.asInstanceOf[IndexOptions]).toFuture().asInstanceOf[Future[T]]
/* drop index */
case DropIndexByName(indexName, None) => //SingleObservable
coll.dropIndex(indexName).toFuture().asInstanceOf[Future[T]]
case DropIndexByName(indexName, Some(opt)) => //SingleObservable
coll.dropIndex(indexName, opt.asInstanceOf[DropIndexOptions]).toFuture().asInstanceOf[Future[T]]
case DropIndexByKey(key, None) => //SingleObservable
coll.dropIndex(key).toFuture().asInstanceOf[Future[T]]
case DropIndexByKey(key, Some(opt)) => //SingleObservable
coll.dropIndex(key, opt.asInstanceOf[DropIndexOptions]).toFuture().asInstanceOf[Future[T]]
case DropAllIndexes(None) => //SingleObservable
coll.dropIndexes().toFuture().asInstanceOf[Future[T]]
case DropAllIndexes(Some(opt)) => //SingleObservable
coll.dropIndexes(opt.asInstanceOf[DropIndexOptions]).toFuture().asInstanceOf[Future[T]]
}
}
*/
}
object MongoActionStream {
import MGOClasses._
case class StreamingInsert[A](dbName: String,
collName: String,
converter: A => Document,
parallelism: Int = 1
) extends MGOCommands
case class StreamingDelete[A](dbName: String,
collName: String,
toFilter: A => Bson,
parallelism: Int = 1,
justOne: Boolean = false
) extends MGOCommands
case class StreamingUpdate[A](dbName: String,
collName: String,
toFilter: A => Bson,
toUpdate: A => Bson,
parallelism: Int = 1,
justOne: Boolean = false
) extends MGOCommands
case class InsertAction[A](ctx: StreamingInsert[A])(
implicit mongoClient: MongoClient) {
val database = mongoClient.getDatabase(ctx.dbName)
val collection = database.getCollection(ctx.collName)
def performOnRow(implicit ec: ExecutionContext): Flow[A, Document, NotUsed] =
Flow[A].map(ctx.converter)
.mapAsync(ctx.parallelism)(doc => collection.insertOne(doc).toFuture().map(_ => doc))
}
case class UpdateAction[A](ctx: StreamingUpdate[A])(
implicit mongoClient: MongoClient) {
val database = mongoClient.getDatabase(ctx.dbName)
val collection = database.getCollection(ctx.collName)
def performOnRow(implicit ec: ExecutionContext): Flow[A, A, NotUsed] =
if (ctx.justOne) {
Flow[A]
.mapAsync(ctx.parallelism)(a =>
collection.updateOne(ctx.toFilter(a), ctx.toUpdate(a)).toFuture().map(_ => a))
} else
Flow[A]
.mapAsync(ctx.parallelism)(a =>
collection.updateMany(ctx.toFilter(a), ctx.toUpdate(a)).toFuture().map(_ => a))
}
case class DeleteAction[A](ctx: StreamingDelete[A])(
implicit mongoClient: MongoClient) {
val database = mongoClient.getDatabase(ctx.dbName)
val collection = database.getCollection(ctx.collName)
def performOnRow(implicit ec: ExecutionContext): Flow[A, A, NotUsed] =
if (ctx.justOne) {
Flow[A]
.mapAsync(ctx.parallelism)(a =>
collection.deleteOne(ctx.toFilter(a)).toFuture().map(_ => a))
} else
Flow[A]
.mapAsync(ctx.parallelism)(a =>
collection.deleteMany(ctx.toFilter(a)).toFuture().map(_ => a))
}
}
object MGOHelpers {
implicit class DocumentObservable[C](val observable: Observable[Document]) extends ImplicitObservable[Document] {
override val converter: (Document) => String = (doc) => doc.toJson
}
implicit class GenericObservable[C](val observable: Observable[C]) extends ImplicitObservable[C] {
override val converter: (C) => String = (doc) => doc.toString
}
trait ImplicitObservable[C] {
val observable: Observable[C]
val converter: (C) => String
def results(): Seq[C] = Await.result(observable.toFuture(), 10 seconds)
def headResult() = Await.result(observable.head(), 10 seconds)
def printResults(initial: String = ""): Unit = {
if (initial.length > 0) print(initial)
results().foreach(res => println(converter(res)))
}
def printHeadResult(initial: String = ""): Unit = println(s"${initial}${converter(headResult())}")
}
def getResult[T](fut: Future[T], timeOut: Duration = 1 second): T = {
Await.result(fut, timeOut)
}
def getResults[T](fut: Future[Iterable[T]], timeOut: Duration = 1 second): Iterable[T] = {
Await.result(fut, timeOut)
}
import monix.eval.Task
import monix.execution.Scheduler.Implicits.global
final class FutureToTask[A](x: => Future[A]) {
def asTask: Task[A] = Task.deferFuture[A](x)
}
final class TaskToFuture[A](x: => Task[A]) {
def asFuture: Future[A] = x.runAsync
}
}
MGOProtoConversion.scala
package sdp.mongo.engine
import org.mongodb.scala.bson.collection.immutable.Document
import org.bson.conversions.Bson
import sdp.grpc.services._
import protobuf.bytes.Converter._
import MGOClasses._
import MGOAdmins._
import MGOCommands._
import org.bson.BsonDocument
import org.bson.codecs.configuration.CodecRegistry
import org.mongodb.scala.bson.codecs.DEFAULT_CODEC_REGISTRY
import org.mongodb.scala.FindObservable
object MGOProtoConversion {
type MGO_COMMAND_TYPE = Int
val MGO_COMMAND_FIND = 0
val MGO_COMMAND_COUNT = 20
val MGO_COMMAND_DISTICT = 21
val MGO_COMMAND_DOCUMENTSTREAM = 1
val MGO_COMMAND_AGGREGATE = 2
val MGO_COMMAND_INSERT = 3
val MGO_COMMAND_DELETE = 4
val MGO_COMMAND_REPLACE = 5
val MGO_COMMAND_UPDATE = 6
val MGO_ADMIN_DROPCOLLECTION = 8
val MGO_ADMIN_CREATECOLLECTION = 9
val MGO_ADMIN_LISTCOLLECTION = 10
val MGO_ADMIN_CREATEVIEW = 11
val MGO_ADMIN_CREATEINDEX = 12
val MGO_ADMIN_DROPINDEXBYNAME = 13
val MGO_ADMIN_DROPINDEXBYKEY = 14
val MGO_ADMIN_DROPALLINDEXES = 15
case class AdminContext(
tarName: String = "",
bsonParam: Seq[Bson] = Nil,
options: Option[Any] = None,
objName: String = ""
){
def toProto = sdp.grpc.services.ProtoMGOAdmin(
tarName = this.tarName,
bsonParam = this.bsonParam.map {b => sdp.grpc.services.ProtoMGOBson(marshal(b))},
objName = this.objName,
options = this.options.map(b => ProtoAny(marshal(b)))
)
}
object AdminContext {
def fromProto(msg: sdp.grpc.services.ProtoMGOAdmin) = new AdminContext(
tarName = msg.tarName,
bsonParam = msg.bsonParam.map(b => unmarshal[Bson](b.bson)),
objName = msg.objName,
options = msg.options.map(b => unmarshal[Any](b.value))
)
}
case class Context(
dbName: String = "",
collName: String = "",
commandType: MGO_COMMAND_TYPE,
bsonParam: Seq[Bson] = Nil,
resultOptions: Seq[ResultOptions] = Nil,
options: Option[Any] = None,
documents: Seq[Document] = Nil,
targets: Seq[String] = Nil,
only: Boolean = false,
adminOptions: Option[AdminContext] = None
){
def toProto = new sdp.grpc.services.ProtoMGOContext(
dbName = this.dbName,
collName = this.collName,
commandType = this.commandType,
bsonParam = this.bsonParam.map(bsonToProto),
resultOptions = this.resultOptions.map(_.toProto),
options = { if(this.options == None)
None //Some(ProtoAny(com.google.protobuf.ByteString.EMPTY))
else
Some(ProtoAny(marshal(this.options.get))) },
documents = this.documents.map(d => sdp.grpc.services.ProtoMGODocument(marshal(d))),
targets = this.targets,
only = Some(this.only),
adminOptions = this.adminOptions.map(_.toProto)
)
}
object MGODocument {
def fromProto(msg: sdp.grpc.services.ProtoMGODocument): Document =
unmarshal[Document](msg.document)
def toProto(doc: Document): sdp.grpc.services.ProtoMGODocument =
new ProtoMGODocument(marshal(doc))
}
object MGOProtoMsg {
def fromProto(msg: sdp.grpc.services.ProtoMGOContext) = new Context(
dbName = msg.dbName,
collName = msg.collName,
commandType = msg.commandType,
bsonParam = msg.bsonParam.map(protoToBson),
resultOptions = msg.resultOptions.map(r => ResultOptions.fromProto(r)),
options = msg.options.map(a => unmarshal[Any](a.value)),
documents = msg.documents.map(doc => unmarshal[Document](doc.document)),
targets = msg.targets,
adminOptions = msg.adminOptions.map(ado => AdminContext.fromProto(ado))
)
}
def bsonToProto(bson: Bson) =
ProtoMGOBson(marshal(bson.toBsonDocument(
classOf[org.mongodb.scala.bson.collection.immutable.Document],DEFAULT_CODEC_REGISTRY)))
def protoToBson(proto: ProtoMGOBson): Bson = new Bson {
val bsdoc = unmarshal[BsonDocument](proto.bson)
override def toBsonDocument[TDocument](documentClass: Class[TDocument], codecRegistry: CodecRegistry): BsonDocument = bsdoc
}
def ctxFromProto(proto: ProtoMGOContext): MGOContext = proto.commandType match {
case MGO_COMMAND_FIND => {
var ctx = new MGOContext(
dbName = proto.dbName,
collName = proto.collName,
actionType = MGO_QUERY,
action = Some(Find())
)
def toResultOption(rts: Seq[ProtoMGOResultOption]): FindObservable[Document] => FindObservable[Document] = findObj =>
rts.foldRight(findObj)((a,b) => ResultOptions.fromProto(a).toFindObservable(b))
(proto.bsonParam, proto.resultOptions, proto.only) match {
case (Nil, Nil, None) => ctx
case (Nil, Nil, Some(b)) => ctx.setCommand(Find(firstOnly = b))
case (bp,Nil,None) => ctx.setCommand(
Find(filter = Some(protoToBson(bp.head))))
case (bp,Nil,Some(b)) => ctx.setCommand(
Find(filter = Some(protoToBson(bp.head)), firstOnly = b))
case (bp,fo,None) => {
ctx.setCommand(
Find(filter = Some(protoToBson(bp.head)),
andThen = fo.map(ResultOptions.fromProto)
))
}
case (bp,fo,Some(b)) => {
ctx.setCommand(
Find(filter = Some(protoToBson(bp.head)),
andThen = fo.map(ResultOptions.fromProto),
firstOnly = b))
}
case _ => ctx
}
}
case MGO_COMMAND_COUNT => {
var ctx = new MGOContext(
dbName = proto.dbName,
collName = proto.collName,
actionType = MGO_QUERY,
action = Some(Count())
)
(proto.bsonParam, proto.options) match {
case (Nil, None) => ctx
case (bp, None) => ctx.setCommand(
Count(filter = Some(protoToBson(bp.head)))
)
case (Nil,Some(o)) => ctx.setCommand(
Count(options = Some(unmarshal[Any](o.value)))
)
case _ => ctx
}
}
case MGO_COMMAND_DISTICT => {
var ctx = new MGOContext(
dbName = proto.dbName,
collName = proto.collName,
actionType = MGO_QUERY,
action = Some(Distict(fieldName = proto.targets.head))
)
(proto.bsonParam) match {
case Nil => ctx
case bp: Seq[ProtoMGOBson] => ctx.setCommand(
Distict(fieldName = proto.targets.head,filter = Some(protoToBson(bp.head)))
)
case _ => ctx
}
}
case MGO_COMMAND_AGGREGATE => {
new MGOContext(
dbName = proto.dbName,
collName = proto.collName,
actionType = MGO_QUERY,
action = Some(Aggregate(proto.bsonParam.map(p => protoToBson(p))))
)
}