Kafka Replica Allocation, Part 2: What Happens When the Replicas Drop to 0

This article looks at how a Kafka cluster handles the replicas and partitions hosted on a broker when that broker changes state, for example when it crashes or leaves the cluster.

Before getting into that, you need to be clear about how leaders and followers divide the work in a Kafka cluster. See my earlier article on Kafka's leader election process.

When introducing Kafka's election process, I mentioned that the successfully elected leader (the controller) registers various watchers with ZooKeeper, among them:

replicaStateMachine.registerListeners()    // "/brokers/ids" is the key one: it watches all followers joining or leaving the cluster
This line registers a watch on /brokers/ids. Following the call:
private def registerBrokerChangeListener() = {
  zkUtils.zkClient.subscribeChildChanges(ZkUtils.BrokerIdsPath, brokerChangeListener)
}
We end up here. ZkUtils.BrokerIdsPath is exactly the path /brokers/ids, so the interesting part is brokerChangeListener. This listener is defined in ReplicaStateMachine.scala in the kafka.controller package.
class BrokerChangeListener() extends IZkChildListener with Logging {
  this.logIdent = "[BrokerChangeListener on Controller " + controller.config.brokerId + "]: "
  def handleChildChange(parentPath : String, currentBrokerList : java.util.List[String]) {
    info("Broker change listener fired for path %s with children %s".format(parentPath, currentBrokerList.sorted.mkString(",")))
    inLock(controllerContext.controllerLock) {
      if (hasStarted.get) {
        ControllerStats.leaderElectionTimer.time {
          try {
            val curBrokers = currentBrokerList.map(_.toInt).toSet.flatMap(zkUtils.getBrokerInfo)
            val curBrokerIds = curBrokers.map(_.id)
            val liveOrShuttingDownBrokerIds = controllerContext.liveOrShuttingDownBrokerIds
            val newBrokerIds = curBrokerIds -- liveOrShuttingDownBrokerIds
            val deadBrokerIds = liveOrShuttingDownBrokerIds -- curBrokerIds
            val newBrokers = curBrokers.filter(broker => newBrokerIds(broker.id))
            // the lines above are easy to follow: they work out which brokers just joined and which just left
            controllerContext.liveBrokers = curBrokers
            val newBrokerIdsSorted = newBrokerIds.toSeq.sorted
            val deadBrokerIdsSorted = deadBrokerIds.toSeq.sorted
            val liveBrokerIdsSorted = curBrokerIds.toSeq.sorted
            info("Newly added brokers: %s, deleted brokers: %s, all live brokers: %s"
              .format(newBrokerIdsSorted.mkString(","), deadBrokerIdsSorted.mkString(","), liveBrokerIdsSorted.mkString(",")))
            newBrokers.foreach(controllerContext.controllerChannelManager.addBroker)
            deadBrokerIds.foreach(controllerContext.controllerChannelManager.removeBroker)
            // the two lines above maintain the map that holds broker information
            if(newBrokerIds.nonEmpty)
              controller.onBrokerStartup(newBrokerIdsSorted)
            if(deadBrokerIds.nonEmpty)
              // this is the key line: how a failed broker gets handled
              controller.onBrokerFailure(deadBrokerIdsSorted)
          } catch {
            case e: Throwable => error("Error while handling broker changes", e)
          }
        }
      }
    }
  }
}

So, when the broker set changes, the Kafka controller works out from the change to the znode's children which brokers have just joined and which have just left, and for the departed brokers it calls onBrokerFailure.
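
To make that diffing step concrete, here is a minimal standalone sketch (hypothetical broker ids, plain Scala sets) of the same set arithmetic the listener performs:

object BrokerDiffSketch extends App {
  // brokers the controller currently believes are alive or shutting down
  val knownBrokerIds: Set[Int] = Set(1, 2, 3)
  // broker ids read back from the children of /brokers/ids after the change event fires
  val currentBrokerIds: Set[Int] = Set(1, 3, 4)

  val newBrokerIds  = currentBrokerIds -- knownBrokerIds   // joined since the last view: Set(4)
  val deadBrokerIds = knownBrokerIds -- currentBrokerIds   // disappeared since the last view: Set(2)

  println(s"new: $newBrokerIds, dead: $deadBrokerIds")
}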

Let's keep following. Only part of the onBrokerFailure source is excerpted here; we will go through it piece by piece.

def onBrokerFailure(deadBrokers: Seq[Int]){
.....
....
val deadBrokersSet = deadBrokers.toSet
// trigger OfflinePartition state for all partitions whose current leader is one amongst the dead brokers
// pick out every partition whose current leader lives on one of the dead brokers
val partitionsWithoutLeader = controllerContext.partitionLeadershipInfo.filter(partitionAndLeader =>
  deadBrokersSet.contains(partitionAndLeader._2.leaderAndIsr.leader) &&
    !deleteTopicManager.isTopicQueuedUpForDeletion(partitionAndLeader._1.topic)).keySet
// transition these partitions, whose leader has just died, to OfflinePartition
partitionStateMachine.handleStateChanges(partitionsWithoutLeader, OfflinePartition)
This snippet extracts every partition whose leader sat on a broker that just went offline; those partitions need special handling, which is what handleStateChanges does. Let's follow that function.
def handleStateChanges(partitions: Set[TopicAndPartition], targetState: PartitionState,
leaderSelector: PartitionLeaderSelector = noOpPartitionLeaderSelector,
callbacks: Callbacks = (new CallbackBuilder).build) {
....
....
case OfflinePartition =>
  // pre: partition should be in New or Online state
  assertValidPreviousStates(topicAndPartition, List(NewPartition, OnlinePartition, OfflinePartition), OfflinePartition)
  // should be called when the leader for a partition is no longer alive
  stateChangeLogger.trace("Controller %d epoch %d changed partition %s state from %s to %s"
    .format(controllerId, controller.epoch, topicAndPartition, currState, targetState))
  partitionState.put(topicAndPartition, OfflinePartition)

For the offline case, all this function does is set the partition's state to OfflinePartition in the partitionState HashMap; nothing else changes. If you read the full source you will notice a sendRequestsToBrokers call afterwards, but since the handling above changed neither the metadata nor the leadership, there is nothing to tell the follower brokers, so that sendRequestsToBrokers() effectively does nothing here.

Now back to onBrokerFailure(): what does it do next?

def onBrokerFailure(deadBrokers: Seq[Int]) {
...
...
    partitionStateMachine.triggerOnlinePartitionStateChange()

It calls triggerOnlinePartitionStateChange():
def triggerOnlinePartitionStateChange() {
  try {
    brokerRequestBatch.newBatch()
    // try to move all partitions in NewPartition or OfflinePartition state to OnlinePartition state except partitions
    // that belong to topics to be deleted
    for ((topicAndPartition, partitionState) <- partitionState
         if !controller.deleteTopicManager.isTopicQueuedUpForDeletion(topicAndPartition.topic)) {
      if (partitionState.equals(OfflinePartition) || partitionState.equals(NewPartition))
        // Offline and New partitions are picked out here; why not just pass them in as an argument? It feels a bit odd.
        // For the broker-down case, the essential earlier step was just partitionState.put(topicAndPartition, OfflinePartition),
        // but notice that the target state passed here is OnlinePartition!
        handleStateChange(topicAndPartition.topic, topicAndPartition.partition, OnlinePartition, controller.offlinePartitionSelector,
          (new CallbackBuilder).build)
    }
    brokerRequestBatch.sendRequestsToBrokers(controller.epoch)
Notice how similar this is to the handleStateChanges call we saw before? It is in fact the same function, just called with different arguments. See that OnlinePartition? Earlier the partition was marked offline in the HashMap; now it is processed again to bring it back online.
def handleStateChanges(partitions: Set[TopicAndPartition], targetState: PartitionState,
leaderSelector: PartitionLeaderSelector = noOpPartitionLeaderSelector,
callbacks: Callbacks = (new CallbackBuilder).build) {
....
....
case OnlinePartition =>
  assertValidPreviousStates(topicAndPartition, List(NewPartition, OnlinePartition, OfflinePartition), OnlinePartition)
  partitionState(topicAndPartition) match {
    case NewPartition =>
      // initialize leader and isr path for new partition
      initializeLeaderAndIsrForPartition(topicAndPartition)
    case OfflinePartition =>
      electLeaderForPartition(topic, partition, leaderSelector)
    case OnlinePartition =>
      // invoked when the leader needs to be re-elected
      electLeaderForPartition(topic, partition, leaderSelector)
    case _ => // should never come here since illegal previous states are checked above
  }

Because of the earlier state change we now take the OfflinePartition branch and reach the exciting function: electLeaderForPartition, which re-elects the partition's leader. If the election code below feels too long, just focus on the two lines I have flagged as the key points in the comments.
class OfflinePartitionLeaderSelector(controllerContext: ControllerContext, config: KafkaConfig)
  extends PartitionLeaderSelector with Logging {
  this.logIdent = "[OfflinePartitionLeaderSelector]: "
def selectLeader(topicAndPartition: TopicAndPartition, currentLeaderAndIsr: LeaderAndIsr): (LeaderAndIsr, Seq[Int]) = {
    controllerContext.partitionReplicaAssignment.get(topicAndPartition) match {
      case Some(assignedReplicas) =>
        val liveAssignedReplicas = assignedReplicas.filter(r => controllerContext.liveBrokerIds.contains(r))
        val liveBrokersInIsr = currentLeaderAndIsr.isr.filter(r => controllerContext.liveBrokerIds.contains(r))
        val currentLeaderEpoch = currentLeaderAndIsr.leaderEpoch
        val currentLeaderIsrZkPathVersion = currentLeaderAndIsr.zkVersion
        val newLeaderAndIsr =
          if (liveBrokersInIsr.isEmpty) {
            // Prior to electing an unclean (i.e. non-ISR) leader, ensure that doing so is not disallowed by the configuration
            // for unclean leader election.
            if (!LogConfig.fromProps(config.originals, AdminUtils.fetchEntityConfig(controllerContext.zkUtils,
                ConfigType.Topic, topicAndPartition.topic)).uncleanLeaderElectionEnable) {
              throw new NoReplicaOnlineException(("No broker in ISR for partition " +
                "%s is alive. Live brokers are: [%s],".format(topicAndPartition, controllerContext.liveBrokerIds)) +
                " ISR brokers are: [%s]".format(currentLeaderAndIsr.isr.mkString(",")))
            }
            debug("No broker in ISR is alive for %s. Pick the leader from the alive assigned replicas: %s"
.format(topicAndPartition, liveAssignedReplicas.mkString(",")))
            if (liveAssignedReplicas.isEmpty) {
              throw new NoReplicaOnlineException(("No replica for partition " +
                "%s is alive. Live brokers are: [%s],".format(topicAndPartition, controllerContext.liveBrokerIds)) +
                " Assigned replicas are: [%s]".format(assignedReplicas))
            } else {
              ControllerStats.uncleanLeaderElectionRate.mark()
              // the key point is as simple as this: liveAssignedReplicas.head
              val newLeader = liveAssignedReplicas.head
              warn("No broker in ISR is alive for %s. Elect leader %d from live brokers %s. There's potential data loss."
.format(topicAndPartition, newLeader, liveAssignedReplicas.mkString(",")))
              new LeaderAndIsr(newLeader, currentLeaderEpoch + 1, List(newLeader), currentLeaderIsrZkPathVersion + 1)
            }
          } else {
            val liveReplicasInIsr = liveAssignedReplicas.filter(r => liveBrokersInIsr.contains(r))
            // the key point is as simple as this: liveReplicasInIsr.head
            val newLeader = liveReplicasInIsr.head
            debug("Some broker in ISR is alive for %s. Select %d from ISR %s to be the leader."
.format(topicAndPartition, newLeader, liveBrokersInIsr.mkString(",")))
            new LeaderAndIsr(newLeader, currentLeaderEpoch + 1, liveBrokersInIsr.toList, currentLeaderIsrZkPathVersion + 1)
          }
        info("Selected new leader and ISR %s for offline partition %s".format(newLeaderAndIsr.toString(), topicAndPartition))
        (newLeaderAndIsr, liveAssignedReplicas)
      case None =>
        throw new NoReplicaOnlineException("Partition %s doesn't have replicas assigned to it".format(topicAndPartition))
    }
  }
}
The re-election logic is simple. Is any in-sync replica (ISR member) still alive? If so, take the first one from the ISR list. If not, is there an out-of-sync replica that is still running? If so, take the first one from the live assigned-replica list and warn that "There's potential data loss." Not even a live replica left? Then the only option is to throw an exception.
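
Here is a condensed, self-contained restatement of that decision (hypothetical inputs; it ignores the leader epoch, the zkVersion and the unclean.leader.election.enable check that the real selector performs):

object LeaderPickSketch extends App {
  case class LeaderAndIsr(leader: Int, isr: List[Int])

  def pickLeader(assignedReplicas: Seq[Int], isr: Seq[Int], liveBrokers: Set[Int]): LeaderAndIsr = {
    val liveAssigned = assignedReplicas.filter(liveBrokers.contains)
    val liveIsr      = isr.filter(liveBrokers.contains)
    if (liveIsr.nonEmpty)
      LeaderAndIsr(liveIsr.head, liveIsr.toList)                                  // clean election from the surviving ISR
    else if (liveAssigned.nonEmpty)
      LeaderAndIsr(liveAssigned.head, List(liveAssigned.head))                    // unclean election: potential data loss
    else
      throw new IllegalStateException("No replica for this partition is alive")   // partition goes offline
  }

  // leader 2 died, the ISR was only (2), but assigned replica 5 is still up: unclean election picks 5
  println(pickLeader(assignedReplicas = Seq(2, 5), isr = Seq(2), liveBrokers = Set(1, 3, 5)))
}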

OK, the partition leader has been re-elected; now it's time to deal with the replicas. From the structure of the Kafka source you can see that the controller maintains two state machines, PartitionStateMachine and ReplicaStateMachine. Think of it as an object-oriented abstraction: every partition and every replica is modelled as a stateful object. Everything analysed above happened inside the PartitionStateMachine.

Back to onBrokerFailure() once more:

def onBrokerFailure(deadBrokers: Seq[Int]) {
....
....
var allReplicasOnDeadBrokers = controllerContext.replicasOnBrokers(deadBrokersSet)
val activeReplicasOnDeadBrokers = allReplicasOnDeadBrokers.filterNot(p => deleteTopicManager.isTopicQueuedUpForDeletion(p.topic))
// handle dead replicas
replicaStateMachine.handleStateChanges(activeReplicasOnDeadBrokers, OfflineReplica)

As I said above, the state machines of the affected replicas are now processed with target state OfflineReplica, which takes us to the corresponding case branch:
case OfflineReplica =>
  assertValidPreviousStates(partitionAndReplica,
    List(NewReplica, OnlineReplica, OfflineReplica, ReplicaDeletionIneligible), targetState)
  // send stop replica command to the replica so that it stops fetching from the leader
  brokerRequestBatch.addStopReplicaRequestForBrokers(List(replicaId), topic, partition, deletePartition = false)
  // As an optimization, the controller removes dead replicas from the ISR
  val leaderAndIsrIsEmpty: Boolean =
    controllerContext.partitionLeadershipInfo.get(topicAndPartition) match {
      case Some(currLeaderIsrAndControllerEpoch) =>
        // key point: removeReplicaFromIsr
        controller.removeReplicaFromIsr(topic, partition, replicaId) match {
          case Some(updatedLeaderIsrAndControllerEpoch) =>
            // send the shrunk ISR state change request to all the remaining alive replicas of the partition.
            val currentAssignedReplicas = controllerContext.partitionReplicaAssignment(topicAndPartition)
            if (!controller.deleteTopicManager.isPartitionToBeDeleted(topicAndPartition)) {
              brokerRequestBatch.addLeaderAndIsrRequestForBrokers(currentAssignedReplicas.filterNot(_ == replicaId),
                topic, partition, updatedLeaderIsrAndControllerEpoch, replicaAssignment)
            }
            replicaState.put(partitionAndReplica, OfflineReplica)
            stateChangeLogger.trace("Controller %d epoch %d changed state of replica %d for partition %s from %s to %s"
              .format(controllerId, controller.epoch, replicaId, topicAndPartition, currState, targetState))
            false
          case None =>
            true
        }
      case None =>
        true
    }
The first step is to tell the affected replicas that their leader has changed and that they should stop fetching from that broker to maintain the ISR, which is this line:
brokerRequestBatch.addStopReplicaRequestForBrokers(List(replicaId), topic, partition, deletePartition = false)
After that, the main processing logic moves into removeReplicaFromIsr:
def removeReplicaFromIsr(topic: String, partition: Int, replicaId: Int): Option[LeaderIsrAndControllerEpoch] = {
......
......
        if (leaderAndIsr.isr.contains(replicaId)) {
          // if the replica to be removed from the ISR is also the leader, set the new leader value to -1
          val newLeader = if (replicaId == leaderAndIsr.leader) LeaderAndIsr.NoLeader else leaderAndIsr.leader
          var newIsr = leaderAndIsr.isr.filter(b => b != replicaId)

          // if the replica to be removed from the ISR is the last surviving member of the ISR and unclean leader election
          // is disallowed for the corresponding topic, then we must preserve the ISR membership so that the replica can
          // eventually be restored as the leader.
          if (newIsr.isEmpty && !LogConfig.fromProps(config.originals, AdminUtils.fetchEntityConfig(zkUtils,
              ConfigType.Topic, topicAndPartition.topic)).uncleanLeaderElectionEnable) {
            info("Retaining last ISR %d of partition %s since unclean leader election is disabled".format(replicaId, topicAndPartition))
            newIsr = leaderAndIsr.isr
          }

          val newLeaderAndIsr = new LeaderAndIsr(newLeader, leaderAndIsr.leaderEpoch + 1,
            newIsr, leaderAndIsr.zkVersion + 1)
          // update the new leadership decision in zookeeper or retry
          val (updateSucceeded, newVersion) = ReplicationUtils.updateLeaderAndIsr(zkUtils, topic, partition,
            newLeaderAndIsr, epoch, leaderAndIsr.zkVersion)

          newLeaderAndIsr.zkVersion = newVersion
          finalLeaderIsrAndControllerEpoch = Some(LeaderIsrAndControllerEpoch(newLeaderAndIsr, epoch))
          controllerContext.partitionLeadershipInfo.put(topicAndPartition, finalLeaderIsrAndControllerEpoch.get)
          if (updateSucceeded)
            info("New leader and ISR for partition %s is %s".format(topicAndPartition, newLeaderAndIsr.toString()))
          updateSucceeded
        }
......
......
}
Some non-essential code has been cut. See the key point? If the replica being removed was itself the leader, the leader is recorded as no leader, i.e. -1. The rest is just syncing the new state to ZooKeeper and bumping the version numbers (the partition's leader epoch and zkVersion).
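
The shrink itself boils down to a couple of lines. Here is a minimal sketch with hypothetical values (no ZooKeeper write, no epochs), using -1 for "no leader" as the source does:

object IsrShrinkSketch extends App {
  val NoLeader = -1

  def shrinkIsr(leader: Int, isr: List[Int], deadReplica: Int): (Int, List[Int]) = {
    val newLeader = if (deadReplica == leader) NoLeader else leader   // the leader itself died -> -1
    val newIsr    = isr.filter(_ != deadReplica)                      // drop the dead replica from the ISR
    (newLeader, newIsr)
  }

  // replication factor 1 and the only replica (broker 2) dies:
  println(shrinkIsr(leader = 2, isr = List(2), deadReplica = 2))      // (-1, List()), i.e. "Leader: -1, Isr:" as seen below
}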

And that covers the main logic of what happens when a broker goes down.

Which brings us to the real question: if a partition's ISR shrinks to 0 replicas, Kafka apparently does nothing extra about it.

I was honestly quite surprised when I realised this, so I spent a whole morning digging through the source for how Kafka handles the replica count reaching 0. I never found any code that assigns a fresh broker to take over as partition leader.

Since practice is the sole criterion of truth, let's just test it and see how Kafka behaves. The test environment is a Kafka cluster with 5 brokers.

Create a topic named testSource with 2 partitions and a replication factor of 1. Using a replication factor of 1 makes it quicker to reproduce the replica-count-drops-to-0 situation.
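
For reference, the topic was created with a command along these lines (the standard kafka-topics.sh create invocation; the ZooKeeper address matches the cluster used below):

kafka-topics.sh --zookeeper 10.255.0.12:2181 --create --topic testSource --partitions 2 --replication-factor 1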

[email protected]:~/yf/deploying/kafka-configs$ kafka-topics.sh --zookeeper 10.255.0.12:2181 --describe  --topic testSource
Topic:testSource	PartitionCount:2	ReplicationFactor:1	Configs:
	Topic: testSource	Partition: 0	Leader: 1	Replicas: 1	Isr: 1
	Topic: testSource	Partition: 1	Leader: 2	Replicas: 2	Isr: 2
Now kill the Kafka process on broker 2 and describe the topic again:
[email protected]:~/yf/deploying/kafka-configs$ kafka-topics.sh --zookeeper 10.255.0.12:2181 --describe  --topic testSource
Topic:testSource	PartitionCount:2	ReplicationFactor:1	Configs:
	Topic: testSource	Partition: 0	Leader: 1	Replicas: 1	Isr: 1
	Topic: testSource	Partition: 1	Leader: -1	Replicas: 2	Isr: 

After broker 2 dies, partition 1's leader becomes -1 as expected (as described above, an empty leader is recorded as no leader, -1), and its ISR is now empty.

Can the topic still serve traffic at this point? Open a console producer and a console consumer:

[email protected]:~/yf/deploying/kafka-configs$ kafka-console-producer.sh --broker-list 10.255.0.12:9092 --topic testSource
sdsd
sdsd
sdsd
aaa
[email protected]:~$ /opt/kafka0.10/bin/kafka-console-consumer.sh --bootstrap-server master:9092  --topic testSource
sdsd
sdsd
sdsd
aaa

OK, we can see that even with one partition dead, Kafka keeps working.

Now kill broker 1 as well, the last remaining broker serving this topic:

[email protected]:~/yf/deploying/kafka-configs$ kafka-topics.sh --zookeeper 10.255.0.12:2181 --describe  --topic testSource
Topic:testSource	PartitionCount:2	ReplicationFactor:1	Configs:
	Topic: testSource	Partition: 0	Leader: -1	Replicas: 1	Isr: 
	Topic: testSource	Partition: 1	Leader: -1	Replicas: 2	Isr: 
Both partitions now have leader -1. Will this Kafka topic still work?
[email protected]:~/yf/deploying/kafka-configs$ kafka-console-producer.sh --broker-list 10.255.0.12:9092 --topic testSource
dsd
fdfd
gfgfg
sdsd[2017-11-20 14:41:40,532] ERROR Error when sending message to topic testSource with key: null, value: 3 bytes with error: (org.apache.kafka.clients.producer.internals.ErrorLoggingCallback)
org.apache.kafka.common.errors.TimeoutException: Expiring 2 record(s) for testSource-0 due to 1504 ms has passed since batch creation plus linger time
[2017-11-20 14:41:40,536] ERROR Error when sending message to topic testSource with key: null, value: 5 bytes with error: (org.apache.kafka.clients.producer.internals.ErrorLoggingCallback)
org.apache.kafka.common.errors.TimeoutException: Expiring 2 record(s) for testSource-0 due to 1504 ms has passed since batch creation plus linger time

/opt/kafka0.10/bin/kafka-console-consumer.sh --bootstrap-server master:9092  --topic testSource
[2017-11-20 14:45:52,852] WARN Auto offset commit failed for group console-consumer-33901: Commit cannot be completed since the group has already rebalanced and assigned the partitions to another member. This means that the time between subsequent calls to poll() was longer than the configured max.poll.interval.ms, which typically implies that the poll loop is spending too much time message processing. You can address this either by increasing the session timeout or by reducing the maximum size of batches returned in poll() with max.poll.records. (org.apache.kafka.clients.consumer.internals.ConsumerCoordinator)
^CProcessed a total of 0 messages
And there we have it: this topic is now well and truly dead.

In other words, the topic no longer serves traffic. Now let's bring the two brokers back up:

[email protected]:~$ kafka-topics.sh --zookeeper 10.255.0.12:2181 --describe  --topic testSource
Topic:testSource	PartitionCount:2	ReplicationFactor:1	Configs:
	Topic: testSource	Partition: 0	Leader: 1	Replicas: 1	Isr: 1
	Topic: testSource	Partition: 1	Leader: 2	Replicas: 2	Isr: 2
The recovered brokers automatically rejoin the cluster, and the Kafka service for this topic is restored.

Now for the summary:

Hard to believe as it is, Kafka's resilience is not as high as I had imagined. Once a topic is created, the brokers serving that topic are fixed (this excludes deliberately adding partitions or replicas through the proper admin commands; here we only consider brokers going down or crashing). If every replica in a partition's ISR dies, that partition stops serving, although clients can still produce to and consume from the topic through its surviving partitions. If every partition of a topic dies, then sorry, the whole topic stops serving: producers and consumers either block or throw errors until the original brokers come back. There is one trick here: the cluster identifies a broker purely by its brokerId, so a machine configured with the same brokerId, even if it is not the same physical host or IP, can step in for the dead broker.
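
For example, to have a fresh machine stand in for dead broker 2, it should be enough to give it the same broker.id in server.properties. A minimal sketch; the hostname and paths are hypothetical, the keys are the standard ones from this 0.10-era setup:

# server.properties on the replacement machine
broker.id=2
listeners=PLAINTEXT://replacement-host:9092
log.dirs=/data/kafka-logs
zookeeper.connect=10.255.0.12:2181

Note that this only restores the broker's identity: if log.dirs on the new machine is empty, the partitions it leads start out empty as well.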

Remember the questions we set out to answer at the start of this series?

2. If a broker goes down, how are its replicas reassigned?

3. If, because of question 2, a broker becomes a new replica for a topic, what about all the earlier messages it doesn't have? Does it need to pull them from other brokers, and if so, would a large backlog put too much load on the network?

Now we have the answers. Under Kafka's default configuration:

2. Brokers are taken, in order, from the partition's assigned replicas to act as the new leader of the shrinking ISR; once none of them is left alive, the partition stops serving.

3. No broker ever becomes a new replica of a partition just because some other server crashed, so the question of copying old messages over never arises.
