Using epoll server and nio server in Netty
I recently spent some time studying the usage of EpollEventLoopGroup and NioEventLoopGroup in Netty. There is no noticeable difference in how you code against them; epoll has its own corresponding set of APIs, but it only works on Linux, so I used Docker to run the program in a Linux container. This post walks through the details.
nio server
I wrote a simple Hello World HTTP server. I won't go through all of the code, only the relevant parts of the server. I'm using Netty 4.0 rather than Netty 5: Netty 5 was abandoned because its more complex features brought no significant performance improvement, so I won't mention it again.
HttpHelloWorldServerHandler:
package cn.com.epoll;

import io.netty.buffer.Unpooled;
import io.netty.channel.ChannelFutureListener;
import io.netty.channel.ChannelHandlerContext;
import io.netty.channel.ChannelInboundHandlerAdapter;
import io.netty.handler.codec.http.*;
import io.netty.util.AsciiString;

/**
 * Created by xiaxuan on 17/11/14.
 */
public class HttpHelloWorldServerHandler extends ChannelInboundHandlerAdapter {

    private static final byte[] CONTENT = { 'H', 'e', 'l', 'l', 'o', ' ', 'W', 'o', 'r', 'l', 'd' };
    private static final AsciiString CONTENT_TYPE = new AsciiString("Content-Type");
    private static final AsciiString CONTENT_LENGTH = new AsciiString("Content-Length");
    private static final AsciiString CONNECTION = new AsciiString("Connection");
    private static final AsciiString KEEP_ALIVE = new AsciiString("keep-alive");

    @Override
    public void channelReadComplete(ChannelHandlerContext ctx) throws Exception {
        ctx.flush();
    }

    @Override
    public void channelRead(ChannelHandlerContext ctx, Object msg) throws Exception {
        if (msg instanceof HttpRequest) {
            HttpRequest req = (HttpRequest) msg;
            if (HttpUtil.is100ContinueExpected(req)) {
                ctx.write(new DefaultFullHttpResponse(HttpVersion.HTTP_1_1, HttpResponseStatus.CONTINUE));
            }
            boolean keepAlive = HttpUtil.isKeepAlive(req);
            FullHttpResponse response = new DefaultFullHttpResponse(HttpVersion.HTTP_1_1, HttpResponseStatus.OK,
                    Unpooled.wrappedBuffer(CONTENT));
            response.headers().set(CONTENT_TYPE, "text/plain");
            response.headers().set(CONTENT_LENGTH, response.content().readableBytes());
            if (!keepAlive) {
                // close the connection once the response has been written
                ctx.write(response).addListener(ChannelFutureListener.CLOSE);
            } else {
                response.headers().set(CONNECTION, KEEP_ALIVE);
                ctx.write(response);
            }
        }
    }

    @Override
    public void exceptionCaught(ChannelHandlerContext ctx, Throwable cause) throws Exception {
        cause.printStackTrace();
        ctx.close();
    }
}
The channelRead method is straightforward: it checks whether the incoming message is an HTTP request and, if so, writes "Hello World" back. Nothing fancy.
HttpHelloWorldServerInitializer:
package cn.com.epoll;

import io.netty.channel.ChannelInitializer;
import io.netty.channel.socket.SocketChannel;
import io.netty.handler.codec.http.HttpServerCodec;

public class HttpHelloWorldServerInitializer extends ChannelInitializer<SocketChannel> {

    @Override
    protected void initChannel(SocketChannel ch) throws Exception {
        // decode/encode HTTP first, then hand decoded requests to our handler
        ch.pipeline().addLast(new HttpServerCodec());
        ch.pipeline().addLast(new HttpHelloWorldServerHandler());
    }
}
The initializer adds the handlers to the pipeline: the HTTP codec first, then the business handler.
NioHttpHelloWorldServer:
package cn.com.epoll;

import io.netty.bootstrap.ServerBootstrap;
import io.netty.buffer.PooledByteBufAllocator;
import io.netty.channel.Channel;
import io.netty.channel.ChannelOption;
import io.netty.channel.EventLoopGroup;
import io.netty.channel.nio.NioEventLoopGroup;
import io.netty.channel.socket.nio.NioServerSocketChannel;

/**
 * Created by xiaxuan on 17/11/14.
 */
public class NioHttpHelloWorldServer {

    private static final int PORT = Integer.parseInt(System.getProperty("port", "8080"));

    public static void main(String[] args) {
        EventLoopGroup bossGroup = new NioEventLoopGroup(1);
        EventLoopGroup workerGroup = new NioEventLoopGroup();
        ServerBootstrap b = new ServerBootstrap();
        try {
            b.group(bossGroup, workerGroup)
             .channel(NioServerSocketChannel.class)
             .option(ChannelOption.SO_BACKLOG, 1024)
             .childOption(ChannelOption.ALLOCATOR, PooledByteBufAllocator.DEFAULT)
             .childHandler(new HttpHelloWorldServerInitializer());
            // sync() surfaces bind failures instead of silently swallowing them
            Channel ch = b.bind(PORT).sync().channel();
            ch.closeFuture().sync();
        } catch (InterruptedException e) {
            e.printStackTrace();
        } finally {
            bossGroup.shutdownGracefully();
            workerGroup.shutdownGracefully();
        }
    }
}
This is the standard way to write a Netty demo, nothing special, but NioEventLoopGroup deserves a mention. NioEventLoopGroup is essentially a scheduled thread-pool service: tracing its ancestry in the source shows that it ultimately implements ScheduledExecutorService, and it is a pool holding multiple NioEventLoop objects. If you don't specify a pool size, the default is twice the number of CPU cores. The no-arg constructor passes 0:
/**
 * Create a new instance using the default number of threads, the default {@link ThreadFactory} and
 * the {@link SelectorProvider} which is returned by {@link SelectorProvider#provider()}.
 */
public NioEventLoopGroup() {
    this(0);
}
Following the call chain further, when the thread count is 0 the default value is used instead:
static {
    DEFAULT_EVENT_LOOP_THREADS = Math.max(1, SystemPropertyUtil.getInt(
            "io.netty.eventLoopThreads", NettyRuntime.availableProcessors() * 2));

    if (logger.isDebugEnabled()) {
        logger.debug("-Dio.netty.eventLoopThreads: {}", DEFAULT_EVENT_LOOP_THREADS);
    }
}

/**
 * @see MultithreadEventExecutorGroup#MultithreadEventExecutorGroup(int, Executor, Object...)
 */
protected MultithreadEventLoopGroup(int nThreads, Executor executor, Object... args) {
    super(nThreads == 0 ? DEFAULT_EVENT_LOOP_THREADS : nThreads, executor, args);
}
As shown above, the default really is CPU cores × 2, and the static block shows it can be overridden with the io.netty.eventLoopThreads system property.
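To illustrate those sizing rules, here is a minimal sketch of my own (not from the Netty sources): both the constructor argument and the system property control how many event loops a group gets.

import io.netty.channel.EventLoopGroup;
import io.netty.channel.nio.NioEventLoopGroup;

public class EventLoopSizing {
    public static void main(String[] args) {
        // Must be set before Netty's MultithreadEventLoopGroup class initialises,
        // because DEFAULT_EVENT_LOOP_THREADS is read once in its static block.
        System.setProperty("io.netty.eventLoopThreads", "8");

        EventLoopGroup byProperty = new NioEventLoopGroup();   // nThreads == 0 -> default -> 8 loops
        EventLoopGroup byArgument = new NioEventLoopGroup(4);  // explicit -> exactly 4 loops

        byProperty.shutdownGracefully();
        byArgument.shutdownGracefully();
    }
}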
NioEventLoop itself is also worth a mention. Its parent class extends SingleThreadEventExecutor, so it too is an executor service, but one backed by a single thread. When a NioEventLoop is created it also creates a Selector, and that selector manages channels; in effect, a NioEventLoopGroup is a pool of threads, each managing its own set of Channels.
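One practical consequence of that design, shown in a sketch of my own (not from the post's sources): every Channel is registered with exactly one event loop for its whole life, so work scheduled via the channel's own event loop runs on the same single I/O thread with no extra locking.

import io.netty.channel.ChannelHandlerContext;
import io.netty.channel.ChannelInboundHandlerAdapter;

import java.util.concurrent.TimeUnit;

public class PinnedLoopHandler extends ChannelInboundHandlerAdapter {
    @Override
    public void channelActive(ChannelHandlerContext ctx) throws Exception {
        // Runs later on the same single thread that handles all I/O for this channel,
        // so handler state needs no synchronization.
        ctx.channel().eventLoop().schedule(
                () -> System.out.println("still on " + Thread.currentThread().getName()),
                5, TimeUnit.SECONDS);
        ctx.fireChannelActive();
    }
}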
That's as far as the source dive goes for now. Run the program and request it from the browser:
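Requesting the server with curl should return the body the handler writes (assuming the default port 8080):

$ curl http://localhost:8080/
Hello World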
Very simple. That is the nio server running.
epoll server
The epoll server differs mainly in the server class; everything else is the same as above. The server looks like this:
package cn.com.epoll;

import io.netty.bootstrap.ServerBootstrap;
import io.netty.buffer.PooledByteBufAllocator;
import io.netty.channel.Channel;
import io.netty.channel.ChannelOption;
import io.netty.channel.EventLoopGroup;
import io.netty.channel.epoll.EpollEventLoopGroup;
import io.netty.channel.epoll.EpollServerSocketChannel;

/**
 * Created by xiaxuan on 17/11/14.
 */
public class HttpHelloWorldServer {

    static final int PORT = Integer.parseInt(System.getProperty("port", "8080"));

    public static void main(String[] args) {
        // same bootstrap as the NIO version, but with the epoll transport classes
        EventLoopGroup bossGroup = new EpollEventLoopGroup(1);
        EventLoopGroup workerGroup = new EpollEventLoopGroup();
        try {
            ServerBootstrap b = new ServerBootstrap();
            b.channel(EpollServerSocketChannel.class);
            b.option(ChannelOption.SO_BACKLOG, 1024);
            b.childOption(ChannelOption.ALLOCATOR, PooledByteBufAllocator.DEFAULT);
            b.group(bossGroup, workerGroup)
             .childHandler(new HttpHelloWorldServerInitializer());
            Channel ch = b.bind(PORT).sync().channel();
            ch.closeFuture().sync();
        } catch (InterruptedException e) {
            e.printStackTrace();
        } finally {
            bossGroup.shutdownGracefully();
            workerGroup.shutdownGracefully();
        }
    }
}
Before getting into how epoll differs from the NIO selector, let's finish the application side. Running this program on an ordinary Windows or macOS machine fails with:
Exception in thread "main" java.lang.UnsatisfiedLinkError: failed to load the required native library
at io.netty.channel.epoll.Epoll.ensureAvailability(Epoll.java:78)
at io.netty.channel.epoll.EpollEventLoopGroup.<clinit>(EpollEventLoopGroup.java:38)
at cn.com.epoll.HttpHelloWorldServer.main(HttpHelloWorldServer.java:19)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:483)
at com.intellij.rt.execution.application.AppMain.main(AppMain.java:147)
Caused by: java.lang.ExceptionInInitializerError
at io.netty.channel.epoll.Epoll.<clinit>(Epoll.java:33)
... 7 more
Caused by: java.lang.IllegalStateException: Only supported on Linux
at io.netty.channel.epoll.Native.loadNativeLibrary(Native.java:189)
at io.netty.channel.epoll.Native.<clinit>(Native.java:61)
... 8 more
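The cause is that the native epoll transport loads a JNI library that only exists for Linux. If you want a single build that runs anywhere, a common pattern (a sketch of my own, not part of the original example) is to probe Epoll.isAvailable() and fall back to NIO:

import io.netty.channel.EventLoopGroup;
import io.netty.channel.ServerChannel;
import io.netty.channel.epoll.Epoll;
import io.netty.channel.epoll.EpollEventLoopGroup;
import io.netty.channel.epoll.EpollServerSocketChannel;
import io.netty.channel.nio.NioEventLoopGroup;
import io.netty.channel.socket.nio.NioServerSocketChannel;

public final class Transport {
    private Transport() { }

    // pick the native transport when the platform supports it, NIO otherwise
    public static EventLoopGroup newGroup(int nThreads) {
        return Epoll.isAvailable() ? new EpollEventLoopGroup(nThreads) : new NioEventLoopGroup(nThreads);
    }

    public static Class<? extends ServerChannel> serverChannelClass() {
        return Epoll.isAvailable() ? EpollServerSocketChannel.class : NioServerSocketChannel.class;
    }
}

With that, the bootstrap becomes b.channel(Transport.serverChannelClass()), and the same jar runs on macOS during development while picking up epoll on Linux. I didn't take that route here; instead I ran the epoll version under Docker.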
The epoll model is only supported on Linux kernel 2.6 and later; it is not available on Windows or macOS, so the program has to run on Linux. My machine is a Mac and I don't have a Linux VM installed, so I used Docker instead. To make the Maven-built jar easy to run, I also used a Maven plugin that produces an executable jar:
<plugin>
    <groupId>org.apache.maven.plugins</groupId>
    <artifactId>maven-shade-plugin</artifactId>
    <version>2.3</version>
    <executions>
        <execution>
            <phase>package</phase>
            <goals>
                <goal>shade</goal>
            </goals>
            <configuration>
                <transformers>
                    <transformer implementation="org.apache.maven.plugins.shade.resource.ManifestResourceTransformer">
                        <mainClass>${exec.mainClass}</mainClass>
                    </transformer>
                </transformers>
                <artifactSet>
                </artifactSet>
            </configuration>
        </execution>
    </executions>
</plugin>
The Dockerfile is as follows:
FROM java:8
MAINTAINER bingwenwuhen bingwenwuhen@163.com
RUN mkdir /app
COPY target/epoll-server-1.0-SNAPSHOT.jar /app
ENTRYPOINT ["java", "-jar", "app/epoll-server-1.0-SNAPSHOT.jar"]
EXPOSE 8080
The java:8 base image I pulled is itself a Linux image (Debian-based), so the epoll transport is available in the container. If a given base image doesn't support the epoll model, you can instead pull a centos image and set up the Java environment yourself; there is plenty of material online, so I won't go into that here.
After Maven compiles and packages, build the Docker image and run it, then run docker ps to check that the container is up:
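For reference, the round trip from package to running container looks roughly like this (the image tag epoll-server is my own placeholder):

mvn clean package
docker build -t epoll-server .
docker run -d -p 8080:8080 epoll-server
docker ps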
The container runs normally. Requesting the service with curl returns the same result as before; the epoll server works.
Looking at the EpollEventLoopGroup source, NioEventLoopGroup and EpollEventLoopGroup ultimately extend the same base class and differ only in transport-specific details, so I won't walk through the EpollEventLoopGroup source here. The epoll model itself is too involved to cover properly in this post; I'll devote a separate article to EpollEventLoopGroup and the epoll model when I have time.
Source code download
I won't give a download link for the NioEventLoopGroup version; below is the download link for the epoll server source: