Implementing a Simple In-Memory LRU Object Cache in Java
Scenarios where objects need to be cached in memory come up all the time. For example:
In an Android instant messaging app, the user list, the conversation page, the group chat page, and so on all display large amounts of user information. At a minimum a name and an avatar are needed, and they have to be fetched from the server.
These pages are very likely to be visited again, and the user profiles shown on them are very likely to be displayed again as well.
Hitting the server for every display would clearly be wasteful, but caching every user's information is not reasonable either, so the user information should be cached selectively.
So I wrote a simple in-memory cache container based on the LRU policy.
LRU stands for Least Recently Used; roughly speaking, the objects used most recently are the last to be discarded.
Source code first, then the discussion.
Source code:
package lx.af.utils.cache;

import android.support.annotation.NonNull;

import java.util.Collections;
import java.util.HashMap;
import java.util.LinkedList;

/**
 * author: lx
 * date: 16-6-1
 *
 * simple in-memory object LRU cache, based on HashMap.
 *
 * usage of the cache is simple:
 * specify a max object count and a purge threshold, then the cache is ready to go.
 * or specify only the max count, and the purge threshold will be set to max * 0.75.
 *
 * when the cache gets purged:
 * when an object is added to the cache, the total object count is checked against the max count.
 * if exceeded, objects are deleted in access-time order until the count drops to the threshold.
 * the access time of an object is recorded on put() and get(), and objects with older
 * access times are purged first. the purge is done automatically.
 *
 * this class is thread-safe.
 *
 * @param <K> the type of keys maintained by the cache. rules are the same as for HashMap.
 * @param <V> the type of mapped values. rules are the same as for HashMap.
 */
public class SimpleMemLruCache<K, V> {

    private final HashMap<K, Wrapper<K, V>> mMap;
    private final int mMax;
    private final int mThreshold;
    private final Object mLock = new Object();

    /**
     * create a cache with a max object count.
     * the threshold will be set to max * 0.75.
     * @param max max object count of the cache; exceeding it triggers an object purge.
     */
    public SimpleMemLruCache(int max) {
        this(max, (int) (max * 0.75f));
    }

    /**
     * create a cache with both a max object count and a purge threshold.
     * threshold must not be greater than max, or an exception will be thrown.
     * @param max       max object count of the cache; exceeding it triggers an object purge.
     * @param threshold purge threshold: when the object count exceeds max, deletion is
     *                  triggered. the oldest objects are deleted first, until the count
     *                  drops to the threshold.
     */
    public SimpleMemLruCache(int max, int threshold) {
        if (threshold > max) {
            throw new IllegalArgumentException("threshold should not be greater than max");
        }
        mMax = max;
        mThreshold = threshold;
        mMap = new HashMap<>(max + 3);
    }

    /**
     * put an object into the cache.
     * the object's last access time is set to the current time.
     * @param key   key, as in {@link HashMap#put(Object, Object)}
     * @param value value, as in {@link HashMap#put(Object, Object)}
     */
    public void put(K key, V value) {
        synchronized (mLock) {
            mMap.put(key, new Wrapper<>(key, value));
        }
        checkPrune();
    }

    /**
     * get an object from the cache.
     * the object's last access time is updated to the current time.
     * @param key key, as in {@link HashMap#get(Object)}
     * @return value, as in {@link HashMap#get(Object)}
     */
    public V get(K key) {
        synchronized (mLock) {
            Wrapper<K, V> wrapper = mMap.get(key);
            if (wrapper != null) {
                wrapper.update();
                return wrapper.obj;
            }
        }
        return null;
    }

    /**
     * clear all cached objects.
     */
    public void clear() {
        synchronized (mLock) {
            mMap.clear();
        }
    }

    // check the object count and purge the least recently used objects if needed
    private void checkPrune() {
        synchronized (mLock) {
            if (mMap.size() < mMax) {
                return;
            }
            // sort the entries by access time, most recently used first
            LinkedList<Wrapper<K, V>> list = new LinkedList<>(mMap.values());
            Collections.sort(list);
            // delete the least recently used objects from the tail of the sorted list
            // until the object count drops to the threshold
            for (int i = list.size() - 1; i >= mThreshold; i--) {
                mMap.remove(list.get(i).key);
            }
            list.clear();
        }
    }

    // wrapper class recording an object's last access time
    private static class Wrapper<K, V> implements Comparable<Wrapper<K, V>> {
        final K key;
        final V obj;
        long updateTime;

        Wrapper(K key, V obj) {
            this.key = key;
            this.obj = obj;
            this.updateTime = System.currentTimeMillis();
        }

        void update() {
            updateTime = System.currentTimeMillis();
        }

        @Override
        public int compareTo(@NonNull Wrapper<K, V> another) {
            // sort most recently used first; Long.compare avoids the int overflow
            // of casting (updateTime - another.updateTime) to int
            return Long.compare(another.updateTime, updateTime);
        }
    }
}
The idea:
The main idea is to use a HashMap as the in-memory container; objects are stored in the HashMap as key-value pairs.
When stored, the value is wrapped in an extra layer so that its last access time can be recorded.
The cache constructor takes two parameters: the maximum object count, and the object-count threshold used when purging.
Objects can then be added continuously. On every put, the object count of the HashMap is checked against the maximum.
If the maximum is reached, a purge starts and objects are removed until the count drops to the threshold.
Two parameters are used to bound the object count for the sake of efficiency:
if there were only a single maximum count, then once the cache filled up, almost every subsequent put would have to scan the cache for the oldest object and delete it.
So a threshold is added. For example, with a maximum of 100 and a threshold of 75, whenever the count reaches 100 the purge kicks in and deletes objects until only 75 remain. The next 25 puts then never trigger another purge, which is a clear efficiency win.
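To make the behavior concrete, here is a small hypothetical driver (the class and variable names are my own, not part of the cache) that exercises both the purge and the access-time refresh; the sleeps are only there because the cache records access times with millisecond resolution:

public class LruCacheDemo {
    public static void main(String[] args) throws InterruptedException {
        // max = 10, threshold = 7: the purge fires when the count reaches 10
        // and trims the cache back to 7 entries
        SimpleMemLruCache<Integer, String> cache = new SimpleMemLruCache<>(10, 7);
        for (int i = 0; i < 9; i++) {
            cache.put(i, "value-" + i);   // 9 entries so far, no purge yet
            Thread.sleep(2);              // keep the millisecond access times distinct
        }
        cache.get(0);                     // refresh key 0: it becomes the most recently used
        Thread.sleep(2);
        cache.put(9, "value-9");          // the 10th entry reaches max and triggers the purge
        // the 3 least recently used entries (keys 1, 2, 3) are dropped;
        // key 0 survives because the get() above refreshed it
        System.out.println(cache.get(0)); // prints "value-0"
        System.out.println(cache.get(1)); // prints null
    }
}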
Example:
Based on the scenario mentioned at the beginning, the cache is used roughly like this:

// create the cache with a max object count of 100 and a purge threshold of 75
SimpleMemLruCache<String, UserInfo> mUserCache = new SimpleMemLruCache<>(100, 75);

// when user info is needed, look in the cache first and only hit the server on a miss
String userId = "some_user_id";
UserInfo user = mUserCache.get(userId);
if (user == null) {
    user = UserInfoRequest.get();
    // put the fetched user info into the cache
    mUserCache.put(userId, user);
}
displayUserInfo(user);
Possible improvements:
This is a very bare-bones cache with a limited range of applicable scenarios, and there is plenty of room for improvement.
For example, real LRU policies usually consider more than just the last update time; how many times an object has been used is often taken into account as well. A rough sketch of that direction follows below.
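As an illustration only (the class name, the hitCount field, and the tie-breaking rule are my own assumptions, not part of SimpleMemLruCache above), the wrapper could additionally count reads and use that count to order entries that were touched at the same time:

// sketch only: an access-count-aware cache entry; names and the tie-breaking rule are assumptions
class CountingWrapper<K, V> implements Comparable<CountingWrapper<K, V>> {
    final K key;
    final V obj;
    long updateTime;
    int hitCount;   // how many times the entry has been read

    CountingWrapper(K key, V obj) {
        this.key = key;
        this.obj = obj;
        this.updateTime = System.currentTimeMillis();
    }

    void update() {
        updateTime = System.currentTimeMillis();
        hitCount++;
    }

    @Override
    public int compareTo(CountingWrapper<K, V> another) {
        // most recently used first; entries touched in the same millisecond are
        // ordered by how often they have been read
        int byTime = Long.compare(another.updateTime, this.updateTime);
        return byTime != 0 ? byTime : Integer.compare(another.hitCount, this.hitCount);
    }
}

A real frequency-aware policy would weigh the read count more heavily than a millisecond tie-break, but the wiring into the cache would stay the same: swap the wrapper and keep the sorted purge.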
Also, this cache is only suitable for a fairly small number of objects. If it had to hold, say, ten thousand objects, the purge work should be moved off to a worker thread.
Once the purge runs on its own thread, synchronization becomes the problem, so the purge would have to be carried out incrementally, in steps. A rough sketch of that idea is shown below.
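One way that incremental purge might look (a sketch under my own assumptions: the single-thread executor, the BATCH_SIZE constant, and the schedulePrune() name are not part of the class above) is to delete entries in small batches, taking and releasing the lock around each batch so get() and put() are never blocked for long:

// sketch only, assuming these members are added inside SimpleMemLruCache<K, V>;
// extra imports needed: java.util.ArrayList, java.util.List,
// java.util.concurrent.ExecutorService, java.util.concurrent.Executors
private final ExecutorService mPurgeExecutor = Executors.newSingleThreadExecutor();
private static final int BATCH_SIZE = 50;

// would be called from put() instead of checkPrune()
private void schedulePrune() {
    mPurgeExecutor.execute(new Runnable() {
        @Override
        public void run() {
            while (true) {
                synchronized (mLock) {
                    if (mMap.size() <= mThreshold) {
                        return;   // trimmed far enough (or another purge already did the work)
                    }
                    // sort a snapshot, most recently used first, and drop at most one batch
                    // of the least recently used entries from the tail
                    List<Wrapper<K, V>> list = new ArrayList<>(mMap.values());
                    Collections.sort(list);
                    int stop = Math.max(mThreshold, list.size() - BATCH_SIZE);
                    for (int i = list.size() - 1; i >= stop; i--) {
                        mMap.remove(list.get(i).key);
                    }
                }
                // the lock is released between batches, so readers and writers can interleave
            }
        }
    });
}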
These are improvements for later.