SecondaryNameNode fails to start: port already in use

When starting the SecondaryNameNode, startup fails with the following log:
2017-01-12 01:27:04,313 INFO org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode: registered UNIX signal handlers for [TERM, HUP, INT]
2017-01-12 01:27:04,824 INFO org.apache.hadoop.metrics2.impl.MetricsConfig: loaded properties from hadoop-metrics2.properties
2017-01-12 01:27:04,896 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Scheduled snapshot period at 10 second(s).
2017-01-12 01:27:04,896 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: SecondaryNameNode metrics system started
2017-01-12 01:27:05,189 INFO org.apache.hadoop.hdfs.server.common.Storage: Lock on /home/hadoop/hdfs/namesecondary/in_use.lock acquired by nodename [email protected]
2017-01-12 01:27:05,196 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: No KeyProvider found.
2017-01-12 01:27:05,196 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: fsLock is fair:true
2017-01-12 01:27:05,264 INFO org.apache.hadoop.hdfs.server.blockmanagement.DatanodeManager: dfs.block.invalidate.limit=1000
2017-01-12 01:27:05,264 INFO org.apache.hadoop.hdfs.server.blockmanagement.DatanodeManager: dfs.namenode.datanode.registration.ip-hostname-check=true
2017-01-12 01:27:05,266 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: dfs.namenode.startup.delay.block.deletion.sec is set to 000:00:00:00.000
2017-01-12 01:27:05,268 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: The block deletion will start around 2017 Jan 12 01:27:05
2017-01-12 01:27:05,270 INFO org.apache.hadoop.util.GSet: Computing capacity for map BlocksMap
2017-01-12 01:27:05,270 INFO org.apache.hadoop.util.GSet: VM type = 64-bit
2017-01-12 01:27:05,272 INFO org.apache.hadoop.util.GSet: 2.0% max memory 889 MB = 17.8 MB
2017-01-12 01:27:05,272 INFO org.apache.hadoop.util.GSet: capacity = 2^21 = 2097152 entries
2017-01-12 01:27:05,284 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: dfs.block.access.token.enable=false
2017-01-12 01:27:05,285 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: defaultReplication = 3
2017-01-12 01:27:05,285 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: maxReplication = 512
2017-01-12 01:27:05,285 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: minReplication = 1
2017-01-12 01:27:05,285 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: maxReplicationStreams = 2
2017-01-12 01:27:05,285 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: replicationRecheckInterval = 3000
2017-01-12 01:27:05,285 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: encryptDataTransfer = false
2017-01-12 01:27:05,285 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: maxNumBlocksToLog = 1000
2017-01-12 01:27:05,287 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: fsOwner = hadoop (auth:SIMPLE)
2017-01-12 01:27:05,287 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: supergroup = supergroup
2017-01-12 01:27:05,287 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: isPermissionEnabled = true
2017-01-12 01:27:05,287 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: HA Enabled: false
2017-01-12 01:27:05,289 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Append Enabled: true
2017-01-12 01:27:05,486 INFO org.apache.hadoop.util.GSet: Computing capacity for map INodeMap
2017-01-12 01:27:05,486 INFO org.apache.hadoop.util.GSet: VM type = 64-bit
2017-01-12 01:27:05,486 INFO org.apache.hadoop.util.GSet: 1.0% max memory 889 MB = 8.9 MB
2017-01-12 01:27:05,486 INFO org.apache.hadoop.util.GSet: capacity = 2^20 = 1048576 entries
2017-01-12 01:27:05,488 INFO org.apache.hadoop.hdfs.server.namenode.FSDirectory: ACLs enabled? false
2017-01-12 01:27:05,488 INFO org.apache.hadoop.hdfs.server.namenode.FSDirectory: XAttrs enabled? true
2017-01-12 01:27:05,488 INFO org.apache.hadoop.hdfs.server.namenode.FSDirectory: Maximum size of an xattr: 16384
2017-01-12 01:27:05,488 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: Caching file names occuring more than 10 times
2017-01-12 01:27:05,495 INFO org.apache.hadoop.util.GSet: Computing capacity for map cachedBlocks
2017-01-12 01:27:05,496 INFO org.apache.hadoop.util.GSet: VM type = 64-bit
2017-01-12 01:27:05,496 INFO org.apache.hadoop.util.GSet: 0.25% max memory 889 MB = 2.2 MB
2017-01-12 01:27:05,496 INFO org.apache.hadoop.util.GSet: capacity = 2^18 = 262144 entries
2017-01-12 01:27:05,497 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: dfs.namenode.safemode.threshold-pct = 0.9990000128746033
2017-01-12 01:27:05,498 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: dfs.namenode.safemode.min.datanodes = 0
2017-01-12 01:27:05,498 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: dfs.namenode.safemode.extension = 30000
2017-01-12 01:27:05,501 INFO org.apache.hadoop.hdfs.server.namenode.top.metrics.TopMetrics: NNTop conf: dfs.namenode.top.window.num.buckets = 10
2017-01-12 01:27:05,501 INFO org.apache.hadoop.hdfs.server.namenode.top.metrics.TopMetrics: NNTop conf: dfs.namenode.top.num.users = 10
2017-01-12 01:27:05,501 INFO org.apache.hadoop.hdfs.server.namenode.top.metrics.TopMetrics: NNTop conf: dfs.namenode.top.windows.minutes = 1,5,25
2017-01-12 01:27:05,513 INFO org.apache.hadoop.hdfs.DFSUtil: Starting Web-server for secondary at: http://hd2:50090
2017-01-12 01:27:05,579 INFO org.mortbay.log: Logging to org.slf4j.impl.Log4jLoggerAdapter(org.mortbay.log) via org.mortbay.log.Slf4jLog
2017-01-12 01:27:05,592 INFO org.apache.hadoop.security.authentication.server.AuthenticationFilter: Unable to initialize FileSignerSecretProvider, falling back to use random secrets.
2017-01-12 01:27:05,602 INFO org.apache.hadoop.http.HttpRequestLog: Http request log for http.requests.secondary is not defined
2017-01-12 01:27:05,613 INFO org.apache.hadoop.http.HttpServer2: Added global filter 'safety' (class=org.apache.hadoop.http.HttpServer2$QuotingInputFilter)
2017-01-12 01:27:05,617 INFO org.apache.hadoop.http.HttpServer2: Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context secondary
2017-01-12 01:27:05,617 INFO org.apache.hadoop.http.HttpServer2: Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context static
2017-01-12 01:27:05,618 INFO org.apache.hadoop.http.HttpServer2: Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs
2017-01-12 01:27:05,646 INFO org.apache.hadoop.http.HttpServer2: HttpServer.start() threw a non Bind IOException
java.net.BindException: Port in use: hd2:50090
at org.apache.hadoop.http.HttpServer2.openListeners(HttpServer2.java:919)
at org.apache.hadoop.http.HttpServer2.start(HttpServer2.java:856)
at org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode.initialize(SecondaryNameNode.java:276)
at org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode.<init>(SecondaryNameNode.java:192)
at org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode.main(SecondaryNameNode.java:671)
Caused by: java.net.BindException: Address already in use
at sun.nio.ch.Net.bind0(Native Method)
at sun.nio.ch.Net.bind(Net.java:433)
at sun.nio.ch.Net.bind(Net.java:425)
at sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:223)
at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:74)
at org.mortbay.jetty.nio.SelectChannelConnector.open(SelectChannelConnector.java:216)
at org.apache.hadoop.http.HttpServer2.openListeners(HttpServer2.java:914)
... 4 more
2017-01-12 01:27:05,663 FATAL org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode: Failed to start secondary namenode
java.net.BindException: Port in use: hd2:50090
at org.apache.hadoop.http.HttpServer2.openListeners(HttpServer2.java:919)
at org.apache.hadoop.http.HttpServer2.start(HttpServer2.java:856)
at org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode.initialize(SecondaryNameNode.java:276)
at org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode.<init>(SecondaryNameNode.java:192)
at org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode.main(SecondaryNameNode.java:671)
Caused by: java.net.BindException: Address already in use
at sun.nio.ch.Net.bind0(Native Method)
Port 50090 (the SecondaryNameNode web UI port) is already occupied by another process. As root, locate the process listening on it:

netstat -anp | grep 50090

Then kill that process and start the SecondaryNameNode again.
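A minimal sketch of the whole workflow, run as root on the affected node. The PID 2743 and the $HADOOP_HOME path are hypothetical placeholders; substitute whatever netstat actually reports on your machine:

# Find the process bound to the SecondaryNameNode HTTP port (50090).
# -a: all sockets, -n: numeric output, -p: show PID/program name (requires root).
netstat -anp | grep 50090
# Hypothetical output:
# tcp   0   0 :::50090   :::*   LISTEN   2743/java

# Terminate it; try SIGTERM first, fall back to SIGKILL only if it survives.
kill 2743
kill -9 2743   # only if the plain kill did not work

# Start the SecondaryNameNode again (Hadoop 2.x daemon script).
$HADOOP_HOME/sbin/hadoop-daemon.sh start secondarynamenode

Often the occupier turns out to be a stale SecondaryNameNode left over from an earlier start; jps will list it, and in that case stopping it with hadoop-daemon.sh stop secondarynamenode is the cleaner route.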