ZOJ 1117 Entropy (Huffman Tree)
Background
An entropy encoder is a data encoding method that achieves lossless data compression by encoding a message with "wasted" or "extra" information removed. In other words, entropy encoding removes information that was not necessary in the first place to accurately encode the message. A high degree of entropy implies a message with a great deal of wasted information; English text encoded in ASCII is an example of a message type that has very high entropy. Already compressed messages, such as JPEG graphics or ZIP archives, have very little entropy and do not benefit from further attempts at entropy encoding.
English text encoded in ASCII has a high degree of entropy because all characters are encoded using the same number of bits, eight. It is a known fact that the letters E, L, N, R, S and T occur at a considerably higher frequency than do most other letters in English text. If a way could be found to encode just these letters with four bits, then the new encoding would be smaller, would contain all the original information, and would have less entropy. ASCII uses a fixed number of bits for a reason, however: it's easy, since one is always dealing with a fixed number of bits to represent each possible glyph or character. How would an encoding scheme that used four bits for the above letters be able to distinguish between the four-bit codes and eight-bit codes? This seemingly difficult problem is solved using what is known as a "prefix-free variable-length" encoding.
In such an encoding, any number of bits can be used to represent any glyph, and glyphs not present in the message are simply not encoded. However, in order to be able to recover the information, no bit pattern that encodes a glyph is allowed to be the prefix of any other encoding bit pattern. This allows the encoded bitstream to be read bit by bit, and whenever a set of bits is encountered that represents a glyph, that glyph can be decoded. If the prefix-free constraint were not enforced, such a decoding would be impossible: if "A" were encoded as "0" and "B" as "00", the bit string "00" could mean either "AA" or "B".
Consider the text "AAAAABCD". Using ASCII, encoding this would require 64 bits. If, instead, we encode "A" with the bit pattern "00", "B" with "01", "C" with "10", and "D" with "11" then we can encode this text in only 16 bits; the resulting bit pattern would be "0000000000011011". This is still a fixed-length encoding, however; we're using two bits per glyph instead of eight. Since the glyph "A" occurs with greater frequency, could we do better by encoding it with fewer bits? In fact we can, but in order to maintain a prefix-free encoding, some of the other bit patterns will become longer than two bits. An optimal encoding is to encode "A" with "0", "B" with "10", "C" with "110", and "D" with "111". (This is clearly not the only optimal encoding, as it is obvious that the encodings for B, C and D could be interchanged freely for any given encoding without increasing the size of the final encoded message.) Using this encoding, the message encodes in only 13 bits to "0000010110111", a compression ratio of 4.9 to 1 (that is, each bit in the final encoded message represents as much information as did 4.9 bits in the original encoding). Read through this bit pattern from left to right and you'll see that the prefix-free encoding makes it simple to decode this into the original text even though the codes have varying bit lengths.
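To see where the 13 bits come from, multiply each glyph's frequency by its code length: 5 × 1 (A) + 1 × 2 (B) + 1 × 3 (C) + 1 × 3 (D) = 13 bits, and 64/13 ≈ 4.9. The decoding claim is also easy to check mechanically. Below is a minimal sketch (my addition, not part of the original problem) that walks the bit string left to right, growing a pattern until it matches one of the four codes:
#include <stdio.h>
#include <string.h>

int main(void)
{
    const char *codes[] = { "0", "10", "110", "111" };   /* codes for A, B, C, D */
    const char glyphs[] = "ABCD";
    const char *bits = "0000010110111";   /* the 13-bit encoding of AAAAABCD */
    char pattern[8] = "";
    for (int i = 0; bits[i]; i++) {
        size_t len = strlen(pattern);
        pattern[len] = bits[i];
        pattern[len + 1] = '\0';
        for (int k = 0; k < 4; k++)
            if (strcmp(pattern, codes[k]) == 0) {
                /* a complete code word: emit its glyph and start over;
                   this is unambiguous because no code prefixes another */
                putchar(glyphs[k]);
                pattern[0] = '\0';
                break;
            }
    }
    putchar('\n');   /* prints AAAAABCD */
    return 0;
}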
As a second example, consider the text "THE CAT IN THE HAT". In this text, the letter "T" and the space character both occur with the highest frequency, so they will clearly have the shortest encoding bit patterns in an optimal encoding. The letters "C", "I" and "N" only occur once, however, so they will have the longest codes.
There are many possible sets of prefix-free variable-length bit patterns that would yield the optimal encoding, that is, that would allow the text to be encoded in the fewest number of bits. One such optimal encoding is to encode spaces with "00", "A" with "100", "C" with "1110", "E" with "1111", "H" with "110", "I" with "1010", "N" with "1011" and "T" with "01". The optimal encoding therefore requires only 51 bits compared to the 144 that would be necessary to encode the message with 8-bit ASCII encoding, a compression ratio of 2.8 to 1.
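The 51-bit total follows directly from the frequencies: T and the space each occur 4 times with 2-bit codes (2 × 8 = 16 bits), H occurs 3 times with a 3-bit code (9 bits), A and E each occur twice (2 × 3 + 2 × 4 = 14 bits), and C, I and N each occur once with 4-bit codes (3 × 4 = 12 bits), giving 16 + 9 + 14 + 12 = 51 bits.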
Input
The input file will contain a list of text strings, one per line. The text strings will consist only of uppercase alphanumeric characters and underscores (which are used in place of spaces). The end of the input will be signalled by a line containing only the word "END" as the text string. This line should not be processed.
Output
For each text string in the input, output the length in bits of the 8-bit ASCII encoding, the length in bits of an optimal prefix-free variable-length encoding, and the compression ratio accurate to one decimal point.
Example
Input
AAAAABCD
THE_CAT_IN_THE_HAT
END
Output
64 13 4.9
144 51 2.8
Building a Huffman tree directly solves this. The optimal encoded length is the tree's weighted path length: the sum over all glyphs of frequency × depth, since a leaf's depth equals its code length. One special case needs care: if the string contains only one distinct glyph, no merging happens, and each character still needs one bit, so the answer is simply the string length.
Code:
#include <stdio.h>
#include <string.h>

/* Leaf indices: 'A'-'Z' -> 0..25, '0'-'9' -> 26..35, '_' -> 36
   (the statement allows uppercase alphanumerics and underscores).
   Internal nodes are numbered from LEAVES upward; at most LEAVES-1
   merges can happen, so 2*LEAVES slots suffice. */
#define LEAVES 37

struct {
    int lchild, rchild, parent;
} tree[2 * LEAVES];
char text[1024];
int freq[2 * LEAVES];
int optimal;

/* Accumulate frequency * depth over all leaves below node n.
   A leaf's depth is exactly its code length, so the total is the
   bit length of the optimal encoding. */
void entropy(int n, int d)
{
    if (n < 0) return;
    if (n < LEAVES)
        optimal += freq[n] * d;           /* leaf: add weight * code length */
    else {
        entropy(tree[n].lchild, d + 1);   /* both children are one level deeper */
        entropy(tree[n].rchild, d + 1);
    }
}

int main()
{
    int i, j;
    int left, right;
    while (fgets(text, sizeof(text), stdin)) {
        text[strcspn(text, "\r\n")] = '\0';   /* strip the trailing newline */
        if (strcmp(text, "END") == 0) break;
        int length = strlen(text);
        memset(tree, -1, sizeof(tree));
        memset(freq, 0, sizeof(freq));
        /* count glyph frequencies */
        for (i = 0; i < length; i++)
            if (text[i] == '_') freq[36]++;
            else if (text[i] >= '0' && text[i] <= '9') freq[26 + text[i] - '0']++;
            else freq[text[i] - 'A']++;
        int node = LEAVES - 1;   /* index of the last node created so far */
        while (1) {
            /* find the unmerged node with the smallest nonzero weight */
            int min = 0x7fffffff;
            left = -1;
            for (j = 0; j <= node; j++)
                if (tree[j].parent == -1 && freq[j] && freq[j] < min) {
                    min = freq[j];
                    left = j;
                }
            /* find the second smallest */
            min = 0x7fffffff;
            right = -1;
            for (j = 0; j <= node; j++)
                if (j != left && tree[j].parent == -1 && freq[j] && freq[j] < min) {
                    min = freq[j];
                    right = j;
                }
            if (right == -1) break;   /* only one root left: the tree is complete */
            /* merge the two lightest nodes under a new internal node */
            freq[++node] = freq[left] + freq[right];   /* combined weight */
            tree[node].lchild = left;
            tree[node].rchild = right;
            tree[node].parent = -1;
            tree[left].parent = node;
            tree[right].parent = node;
        }
        optimal = 0;
        if (node == LEAVES - 1)
            optimal = length;   /* one distinct glyph: one bit per character */
        else
            entropy(node, 0);
        length *= 8;   /* bits in the 8-bit ASCII encoding */
        printf("%d %d %.1f\n", length, optimal, 1.0 * length / optimal);
    }
    return 0;
}
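As an aside, the tree itself is never needed for the answer: the weighted path length equals the sum of the weights of all internal nodes, so it suffices to repeatedly merge the two smallest counts and accumulate the merged sums. A minimal sketch of that variant follows (my addition, not the original solution; huffman_bits is a made-up name):
#include <stdio.h>

int huffman_bits(const char *s)
{
    int cnt[128] = {0}, w[64], n = 0, total = 0;
    for (; *s; s++) cnt[(unsigned char)*s]++;
    for (int c = 0; c < 128; c++)
        if (cnt[c]) w[n++] = cnt[c];   /* one weight per distinct glyph */
    if (n == 1) return w[0];           /* single glyph: 1 bit per character */
    while (n > 1) {
        int a = 0, b = 1;              /* indices of the two smallest weights */
        if (w[1] < w[0]) { a = 1; b = 0; }
        for (int i = 2; i < n; i++)
            if (w[i] < w[a]) { b = a; a = i; }
            else if (w[i] < w[b]) b = i;
        int merged = w[a] + w[b];
        total += merged;               /* each merge contributes its weight once */
        w[a] = merged;                 /* replace the pair by their sum */
        w[b] = w[--n];
    }
    return total;
}

int main(void)
{
    printf("%d\n", huffman_bits("AAAAABCD"));            /* 13 */
    printf("%d\n", huffman_bits("THE_CAT_IN_THE_HAT"));  /* 51 */
    return 0;
}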