This article covers the following:
- A brief introduction to Volley
- Tracing a single HTTP request to see how Volley actually works
Part One: Introduction to Volley
Volley is an HTTP library that makes networking for Android apps easier and, most importantly, faster. It was announced at the Google I/O 2013 conference.
Its features include:
- Automatic scheduling of network requests
- Multiple concurrent network connections
- Transparent disk and memory response caching
- Support for request prioritization
- Support for cancelling requests (a single request or a set of requests)
- Ease of customization
... ...
With all these advantages, does it have no drawbacks? It does. As is commonly noted, Volley is best suited to small, frequent requests.
Volley is not suitable for large downloads or streaming operations, because it holds all response data entirely in memory while parsing it.
(The introduction above is taken from the official documentation.)
Part Two: Tracing an HTTP request to see how Volley works
Here is a simple JSON request:
private void getWeatherData() {
    RequestQueue requestQueue = Volley.newRequestQueue(this);
    JsonObjectRequest request = new JsonObjectRequest(Request.Method.GET,
            "http://api.map.baidu.com/telematics/v3/weather?location=北京&output=json&ak=6tYzTvGZSOpYB5Oc2YGGOKt8",
            null,
            new Response.Listener<JSONObject>() {
                @Override
                public void onResponse(JSONObject response) {
                    Log.i(TAG, "onResponse: " + response.toString());
                }
            },
            new Response.ErrorListener() {
                @Override
                public void onErrorResponse(VolleyError error) {
                    Log.i(TAG, "onErrorResponse: " + error.toString());
                }
            });
    requestQueue.add(request);
}
We initialize a request queue, create a JSON request object, add the request to the queue, and then receive the data fetched from the network in the onResponse(JSONObject response) callback.
Now let's step through the Volley source code and see what it actually does.
1. Volley.java is a static utility class. Its methods do exactly one thing: initialize a request queue via new RequestQueue(...).
public static RequestQueue newRequestQueue(Context context, HttpStack stack) {
    File cacheDir = new File(context.getCacheDir(), DEFAULT_CACHE_DIR);
    String userAgent = "volley/0";
    try {
        String packageName = context.getPackageName();
        PackageInfo info = context.getPackageManager().getPackageInfo(packageName, 0);
        userAgent = packageName + "/" + info.versionCode;
    } catch (NameNotFoundException e) {
    }
    if (stack == null) {
        if (Build.VERSION.SDK_INT >= 9) {
            stack = new HurlStack();
        } else {
            // Prior to Gingerbread, HttpUrlConnection was unreliable.
            // See: http://android-developers.blogspot.com/2011/09/androids-http-clients.html
            stack = new HttpClientStack(AndroidHttpClient.newInstance(userAgent));
        }
    }
    Network network = new BasicNetwork(stack);
    RequestQueue queue = new RequestQueue(new DiskBasedCache(cacheDir), network);
    queue.start();
    return queue;
}
/**
 * Creates a default instance of the worker pool and calls {@link RequestQueue#start()} on it.
 *
 * @param context A {@link Context} to use for creating the cache dir.
 * @return A started {@link RequestQueue} instance.
 */
public static RequestQueue newRequestQueue(Context context) {
    return newRequestQueue(context, null);
}
1) First, it creates a cache directory.
2) Next, if no HttpStack was supplied, it chooses one based on the system version: on SDK 9 and above it uses HurlStack, backed by HttpURLConnection; below 9 it uses HttpClientStack, backed by HttpClient. The reason is that on SDKs below 9, calling close() on a readable InputStream from HttpURLConnection could poison the connection pool (see the official blog post linked in the code above).
3) It then creates a Network object (BasicNetwork) that uses the chosen HurlStack or HttpClientStack to perform the actual network requests.
4) Finally, it creates a RequestQueue and calls start() to launch the queue's dispatchers.
2. What does RequestQueue.start() do?
public void start() {
    stop();  // Make sure any currently running dispatchers are stopped.
    // Create the cache dispatcher and start it.
    mCacheDispatcher = new CacheDispatcher(mCacheQueue, mNetworkQueue, mCache, mDelivery);
    mCacheDispatcher.start();
    // Create network dispatchers (and corresponding threads) up to the pool size.
    for (int i = 0; i < mDispatchers.length; i++) {
        NetworkDispatcher networkDispatcher = new NetworkDispatcher(mNetworkQueue, mNetwork,
                mCache, mDelivery);
        mDispatchers[i] = networkDispatcher;
        networkDispatcher.start();
    }
}
Starting the queue does two things:
1) It creates and starts a CacheDispatcher, which serves requests whose results are already in the cache (including repeated requests for the same data).
2) It creates and starts the NetworkDispatchers, which fetch data from the network.
Both CacheDispatcher and NetworkDispatcher are subclasses of Thread. By default there is one cache dispatcher thread and four network dispatcher threads.
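This dispatcher setup is a classic producer-consumer arrangement: dispatcher threads block on a shared queue and drain it as work arrives. As a rough standalone illustration (plain Java with hypothetical names, not Volley's actual classes), four worker threads consuming one blocking queue look like this:

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.atomic.AtomicInteger;

public class DispatcherSketch {
    /** Spins up 4 consumer threads, feeds 8 jobs through one queue, returns the number processed. */
    static int runDemo() throws InterruptedException {
        BlockingQueue<Runnable> networkQueue = new LinkedBlockingQueue<>();
        AtomicInteger processed = new AtomicInteger();
        CountDownLatch done = new CountDownLatch(8);

        // Four "network dispatcher" threads, mirroring Volley's default pool size.
        for (int i = 0; i < 4; i++) {
            Thread dispatcher = new Thread(() -> {
                while (true) {
                    try {
                        Runnable request = networkQueue.take();  // blocks while the queue is empty
                        request.run();
                        processed.incrementAndGet();
                        done.countDown();
                    } catch (InterruptedException e) {
                        return;  // time to quit
                    }
                }
            });
            dispatcher.setDaemon(true);
            dispatcher.start();
        }

        // The calling thread plays the producer, like RequestQueue.add(...).
        for (int i = 0; i < 8; i++) {
            networkQueue.put(() -> { /* pretend to perform a network request */ });
        }
        done.await();
        return processed.get();
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(runDemo());
    }
}
```

The key point is that the dispatchers never poll: take() parks the thread until a request arrives, which is exactly why the run() loops we examine below can spin forever without burning CPU.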
3. Adding a request to the queue
public <T> Request<T> add(Request<T> request) {
    // Tag the request as belonging to this queue and add it to the set of current requests.
    request.setRequestQueue(this);
    synchronized (mCurrentRequests) {
        mCurrentRequests.add(request);
    }
    // Process requests in the order they are added.
    request.setSequence(getSequenceNumber());
    request.addMarker("add-to-queue");
    // If the request is uncacheable, skip the cache queue and go straight to the network.
    if (!request.shouldCache()) {
        mNetworkQueue.add(request);
        return request;
    }
    // Insert request into stage if there's already a request with the same cache key in flight.
    synchronized (mWaitingRequests) {
        String cacheKey = request.getCacheKey();
        if (mWaitingRequests.containsKey(cacheKey)) {
            // There is already a request in flight. Queue up.
            Queue<Request<?>> stagedRequests = mWaitingRequests.get(cacheKey);
            if (stagedRequests == null) {
                stagedRequests = new LinkedList<Request<?>>();
            }
            stagedRequests.add(request);
            mWaitingRequests.put(cacheKey, stagedRequests);
            if (VolleyLog.DEBUG) {
                VolleyLog.v("Request for cacheKey=%s is in flight, putting on hold.", cacheKey);
            }
        } else {
            // Insert 'null' queue for this cacheKey, indicating there is now a request in
            // flight.
            mWaitingRequests.put(cacheKey, null);
            mCacheQueue.add(request);
        }
        return request;
    }
}
First, request.setRequestQueue(this) associates the Request with this queue.
Then it checks whether the request's result should be cached; if not, the request goes straight into the network queue mNetworkQueue.
Otherwise, it takes the request's cache key and looks it up in mWaitingRequests, a Map<String, Queue<Request<?>>> that holds all pending requests sharing the same cache key.
If mWaitingRequests already contains the cache key, the request is staged in that key's queue. Otherwise, a null entry is put into mWaitingRequests for that key (marking a request as in flight), and the request is added to the cache queue.
In short: when a request is added, if its result should not be cached, it goes directly to the network queue.
If it should be cached, the queue first checks whether a request with the same cache key is already in flight. If not, the request goes to the cache queue; if so, it is parked in mWaitingRequests under that cache key. When the in-flight request for that key completes, all the parked requests are moved into the cache queue.
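Stripped of Volley's types, this staging logic is just a map from cache key to a queue of waiting duplicates. A minimal sketch (hypothetical names, strings standing in for Request objects):

```java
import java.util.HashMap;
import java.util.LinkedList;
import java.util.Map;
import java.util.Queue;

public class StagingSketch {
    private final Map<String, Queue<String>> waiting = new HashMap<>();
    private final Queue<String> cacheQueue = new LinkedList<>();

    /** Mirrors RequestQueue.add(): stage duplicates, enqueue only the first request per key. */
    public void add(String cacheKey, String request) {
        if (waiting.containsKey(cacheKey)) {
            // A request with this key is already in flight; park the duplicate.
            Queue<String> staged = waiting.get(cacheKey);
            if (staged == null) {
                staged = new LinkedList<>();
            }
            staged.add(request);
            waiting.put(cacheKey, staged);
        } else {
            // First request for this key: mark it in flight (null queue) and enqueue it.
            waiting.put(cacheKey, null);
            cacheQueue.add(request);
        }
    }

    /** Mirrors RequestQueue.finish(): release all parked duplicates to the cache queue. */
    public void finish(String cacheKey) {
        Queue<String> staged = waiting.remove(cacheKey);
        if (staged != null) {
            cacheQueue.addAll(staged);  // safe: the cache was primed by the first request
        }
    }

    public int cacheQueueSize() {
        return cacheQueue.size();
    }
}
```

So three identical requests added back-to-back result in only one reaching the cache queue; the other two are released only after the first one finishes and has primed the cache.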
At this point the RequestQueue has been created and the cache and network dispatcher threads are running. So how does a request actually get executed?
This splits into two parts: how the cache dispatcher handles it, and how the network dispatcher does.
The two dispatchers follow the producer-consumer pattern: one cache thread plus one blocking cache queue, and four network threads plus one blocking network queue. An incoming Request is added to the cache or network blocking queue, and the dispatcher threads continuously take Request objects off the queues and execute them.
(For how blocking queues work, see any introduction to implementing a custom bounded blocking queue.)
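For reference, the core of a bounded blocking queue can be sketched in a few lines with intrinsic locks and wait/notify (a toy version for illustration; Volley itself relies on java.util.concurrent queues rather than hand-rolling this):

```java
import java.util.LinkedList;
import java.util.Queue;

/** A toy bounded blocking queue built on intrinsic locks, for illustration only. */
public class BoundedBlockingQueue<T> {
    private final Queue<T> items = new LinkedList<>();
    private final int capacity;

    public BoundedBlockingQueue(int capacity) {
        this.capacity = capacity;
    }

    public synchronized void put(T item) throws InterruptedException {
        while (items.size() == capacity) {
            wait();  // block producers while the queue is full
        }
        items.add(item);
        notifyAll();  // wake any consumer blocked in take()
    }

    public synchronized T take() throws InterruptedException {
        while (items.isEmpty()) {
            wait();  // block consumers while the queue is empty
        }
        T item = items.remove();
        notifyAll();  // wake any producer blocked in put()
        return item;
    }
}
```

Note the `while` loops around wait(): re-checking the condition after waking guards against spurious wakeups, the same discipline the JDK queues follow internally.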
4. Handling the request
(A) The cache dispatcher
public void run() {
    if (DEBUG) VolleyLog.v("start new dispatcher");
    Process.setThreadPriority(Process.THREAD_PRIORITY_BACKGROUND);
    // Make a blocking call to initialize the cache.
    mCache.initialize();
    while (true) {
        try {
            // Get a request from the cache triage queue, blocking until
            // at least one is available.
            final Request<?> request = mCacheQueue.take();
            request.addMarker("cache-queue-take");
            // If the request has been canceled, don't bother dispatching it.
            if (request.isCanceled()) {
                request.finish("cache-discard-canceled");
                continue;
            }
            // Attempt to retrieve this item from cache.
            Cache.Entry entry = mCache.get(request.getCacheKey());
            if (entry == null) {
                request.addMarker("cache-miss");
                // Cache miss; send off to the network dispatcher.
                mNetworkQueue.put(request);
                continue;
            }
            // If it is completely expired, just send it to the network.
            if (entry.isExpired()) {
                request.addMarker("cache-hit-expired");
                request.setCacheEntry(entry);
                mNetworkQueue.put(request);
                continue;
            }
            // We have a cache hit; parse its data for delivery back to the request.
            request.addMarker("cache-hit");
            Response<?> response = request.parseNetworkResponse(
                    new NetworkResponse(entry.data, entry.responseHeaders));
            request.addMarker("cache-hit-parsed");
            if (!entry.refreshNeeded()) {
                // Completely unexpired cache hit. Just deliver the response.
                mDelivery.postResponse(request, response);
            } else {
                // Soft-expired cache hit. We can deliver the cached response,
                // but we need to also send the request to the network for
                // refreshing.
                request.addMarker("cache-hit-refresh-needed");
                request.setCacheEntry(entry);
                // Mark the response as intermediate.
                response.intermediate = true;
                // Post the intermediate response back to the user and have
                // the delivery then forward the request along to the network.
                mDelivery.postResponse(request, response, new Runnable() {
                    @Override
                    public void run() {
                        try {
                            mNetworkQueue.put(request);
                        } catch (InterruptedException e) {
                            // Not much we can do about this.
                        }
                    }
                });
            }
        } catch (InterruptedException e) {
            // We may have been interrupted because it was time to quit.
            if (mQuit) {
                return;
            }
            continue;
        }
    }
}
As mentioned above, CacheDispatcher is a subclass of Thread, so let's look at what its run() method does.
run() contains an infinite loop, so the thread never exits on its own.
First it takes a request from the blocking cache queue mCacheQueue, blocking if the queue is empty.
Next it checks whether the request has been cancelled; if so, it continues to the next request.
Then it tries to look up cached data using the request's cache key.
If there is no cached entry, the request is forwarded to the network queue to fetch fresh data, and the loop moves on to the next request.
If an entry exists, it checks whether the entry has fully expired; if so, the request is likewise forwarded to the network queue and the loop moves on.
If the entry has not expired, the cached data is parsed into a Response<?> object.
Finally, it checks whether the cached entry needs refreshing. If not, the ResponseDelivery delivers the Response directly; if so, the cached Response is delivered as an intermediate result and the request is also put on the network queue for a refresh.
That covers the core of what the cache dispatcher does.
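The "fully expired" versus "needs refreshing" distinction comes from Volley's Cache.Entry, which carries two expiry timestamps (roughly, a hard `ttl` and a soft `softTtl`). A simplified standalone model of that two-level freshness check:

```java
/** Simplified model of the two freshness levels in Volley's Cache.Entry (ttl vs softTtl). */
public class FreshnessSketch {
    long ttl;      // hard expiry: after this, the entry must be refetched from the network
    long softTtl;  // soft expiry: after this, serve the entry but refresh it in the background

    public FreshnessSketch(long ttl, long softTtl) {
        this.ttl = ttl;
        this.softTtl = softTtl;
    }

    /** Fully expired: the cache dispatcher sends the request straight to the network queue. */
    public boolean isExpired(long now) {
        return this.ttl < now;
    }

    /** Soft-expired: deliver the cached response as intermediate, then refresh. */
    public boolean refreshNeeded(long now) {
        return this.softTtl < now;
    }
}
```

A fresh entry (now before softTtl) is served directly; between softTtl and ttl the stale copy is served immediately while a network refresh runs; past ttl the cache is bypassed entirely.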
(B) The network dispatcher
public void run() {
    Process.setThreadPriority(Process.THREAD_PRIORITY_BACKGROUND);
    while (true) {
        long startTimeMs = SystemClock.elapsedRealtime();
        Request<?> request;
        try {
            // Take a request from the queue.
            request = mQueue.take();
        } catch (InterruptedException e) {
            // We may have been interrupted because it was time to quit.
            if (mQuit) {
                return;
            }
            continue;
        }
        try {
            request.addMarker("network-queue-take");
            // If the request was cancelled already, do not perform the
            // network request.
            if (request.isCanceled()) {
                request.finish("network-discard-cancelled");
                continue;
            }
            addTrafficStatsTag(request);
            // Perform the network request.
            NetworkResponse networkResponse = mNetwork.performRequest(request);
            request.addMarker("network-http-complete");
            // If the server returned 304 AND we delivered a response already,
            // we're done -- don't deliver a second identical response.
            if (networkResponse.notModified && request.hasHadResponseDelivered()) {
                request.finish("not-modified");
                continue;
            }
            // Parse the response here on the worker thread.
            Response<?> response = request.parseNetworkResponse(networkResponse);
            request.addMarker("network-parse-complete");
            // Write to cache if applicable.
            // TODO: Only update cache metadata instead of entire record for 304s.
            if (request.shouldCache() && response.cacheEntry != null) {
                mCache.put(request.getCacheKey(), response.cacheEntry);
                request.addMarker("network-cache-written");
            }
            // Post the response back.
            request.markDelivered();
            mDelivery.postResponse(request, response);
        } catch (VolleyError volleyError) {
            volleyError.setNetworkTimeMs(SystemClock.elapsedRealtime() - startTimeMs);
            parseAndDeliverNetworkError(request, volleyError);
        } catch (Exception e) {
            VolleyLog.e(e, "Unhandled exception %s", e.toString());
            VolleyError volleyError = new VolleyError(e);
            volleyError.setNetworkTimeMs(SystemClock.elapsedRealtime() - startTimeMs);
            mDelivery.postError(request, volleyError);
        }
    }
}
Likewise, NetworkDispatcher is a subclass of Thread, and its run() method is also an infinite loop so the thread never exits on its own.
It first takes a request from the blocking network queue, blocking if the queue is empty.
Once it has a request, it checks whether the request has been cancelled.
If not, it uses the HurlStack or HttpClientStack chosen in step 1 to fetch the data, wraps the raw result in a NetworkResponse, parses that into a Response<?>, and then stores the result in the cache if the request wants caching and the parsed response produced a cache entry.
Finally, the ResponseDelivery delivers the Response.
The official documentation includes a flow diagram summarizing this request lifecycle.
5. Delivering the result
Whether a result comes from the CacheDispatcher or a NetworkDispatcher, it is ultimately handed back to the main thread through ExecutorDelivery's postResponse(...) method.
- 1) How does the result get back to the main thread?
The relevant code:
// RequestQueue.java
public RequestQueue(Cache cache, Network network, int threadPoolSize) {
    this(cache, network, threadPoolSize,
            new ExecutorDelivery(new Handler(Looper.getMainLooper())));
}

// ExecutorDelivery.java
public ExecutorDelivery(final Handler handler) {
    // Make an Executor that just wraps the handler.
    mResponsePoster = new Executor() {
        @Override
        public void execute(Runnable command) {
            handler.post(command);
        }
    };
}
When the RequestQueue constructor builds the ExecutorDelivery, it passes in a Handler created with the main thread's Looper. That Handler is therefore bound to the main thread's message queue, so anything it dispatches, whether a sent Message or a posted Runnable, ends up running on the main thread.
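Outside Android you can mimic this "post everything to one designated thread" pattern with a single-threaded Executor (a plain-Java analogy with hypothetical names, not Volley code; the Executor wrapping mirrors what ExecutorDelivery does with the Handler):

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.Executor;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.atomic.AtomicReference;

public class DeliverySketch {
    /** Posts a "response" from a worker thread to a dedicated delivery thread; returns its name. */
    public static String deliverOnMainAnalogue() throws InterruptedException {
        // A single thread standing in for the Android main thread.
        ExecutorService mainThread = Executors.newSingleThreadExecutor(
                r -> new Thread(r, "pseudo-main"));
        // Wrap it as a plain Executor, the way ExecutorDelivery wraps a Handler.
        Executor responsePoster = mainThread::execute;

        AtomicReference<String> deliveredOn = new AtomicReference<>();
        CountDownLatch done = new CountDownLatch(1);
        // A worker thread produces a result, then posts the delivery to the "main" thread.
        new Thread(() -> responsePoster.execute(() -> {
            deliveredOn.set(Thread.currentThread().getName());
            done.countDown();
        })).start();

        done.await();
        mainThread.shutdown();
        return deliveredOn.get();
    }
}
```

No matter which worker thread produced the result, the callback always runs on the one designated thread, which is exactly why onResponse(...) can safely touch the UI in Volley.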
- 2) A closer look at the delivery flow
As noted above, both the CacheDispatcher and the NetworkDispatcher hand their results to ExecutorDelivery's postResponse(...):
@Override
public void postResponse(Request<?> request, Response<?> response) {
    postResponse(request, response, null);
}

@Override
public void postResponse(Request<?> request, Response<?> response, Runnable runnable) {
    request.markDelivered();
    request.addMarker("post-response");
    mResponsePoster.execute(new ResponseDeliveryRunnable(request, response, runnable));
}

private class ResponseDeliveryRunnable implements Runnable {
    private final Request mRequest;
    private final Response mResponse;
    private final Runnable mRunnable;

    public ResponseDeliveryRunnable(Request request, Response response, Runnable runnable) {
        mRequest = request;
        mResponse = response;
        mRunnable = runnable;
    }

    @SuppressWarnings("unchecked")
    @Override
    public void run() {
        // If this request has canceled, finish it and don't deliver.
        if (mRequest.isCanceled()) {
            mRequest.finish("canceled-at-delivery");
            return;
        }
        // Deliver a normal response or error, depending.
        if (mResponse.isSuccess()) {
            mRequest.deliverResponse(mResponse.result);
        } else {
            mRequest.deliverError(mResponse.error);
        }
        // If this is an intermediate response, add a marker, otherwise we're done
        // and the request can be finished.
        if (mResponse.intermediate) {
            mRequest.addMarker("intermediate-response");
        } else {
            mRequest.finish("done");
        }
        // If we have been provided a post-delivery runnable, run it.
        if (mRunnable != null) {
            mRunnable.run();
        }
    }
}
Let's focus on the run() method above.
A) Whether the request was cancelled or delivered as a final (non-intermediate) response, the Request object's finish() method gets called:
void finish(final String tag) {
    if (mRequestQueue != null) {
        mRequestQueue.finish(this);
    }
    ... ...
}
Request.finish() in turn calls RequestQueue.finish():
<T> void finish(Request<T> request) {
    // Remove from the set of requests currently being processed.
    synchronized (mCurrentRequests) {
        mCurrentRequests.remove(request);
    }
    synchronized (mFinishedListeners) {
        for (RequestFinishedListener<T> listener : mFinishedListeners) {
            listener.onRequestFinished(request);
        }
    }
    if (request.shouldCache()) {
        synchronized (mWaitingRequests) {
            String cacheKey = request.getCacheKey();
            Queue<Request<?>> waitingRequests = mWaitingRequests.remove(cacheKey);
            if (waitingRequests != null) {
                if (VolleyLog.DEBUG) {
                    VolleyLog.v("Releasing %d waiting requests for cacheKey=%s.",
                            waitingRequests.size(), cacheKey);
                }
                // Process all queued up requests. They won't be considered as in flight, but
                // that's not a problem as the cache has been primed by 'request'.
                mCacheQueue.addAll(waitingRequests);
            }
        }
    }
}
This does two main things:
1) It removes the current Request from mCurrentRequests (a Set<Request<?>>).
2) If the request is cacheable, it removes the queue of requests parked under the same cache key from mWaitingRequests (described in step 3) and adds them all to the cache queue, so those duplicate requests are now served by the CacheDispatcher from the cache the finished request just primed.
B) mRequest.deliverResponse(mResponse.result);
The concrete Request subclass (e.g. ImageRequest, JsonObjectRequest, StringRequest) provides its own deliverResponse(...). Taking JsonRequest as an example:
protected void deliverResponse(T response) {
    mListener.onResponse(response);
}
Recognize this onResponse(response) call? Yes, it is precisely the callback we supplied when creating the Request object.
That completes the walkthrough, from issuing a network request to receiving the result back on the main thread.
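To see how little a custom Request subclass needs to do, here is the same parse-then-deliver pattern reduced to plain Java (a hypothetical mini framework for illustration, not Volley's actual API):

```java
/** A stripped-down model of Volley's Request/Listener callback plumbing. */
public class RequestSketch {
    interface Listener<T> {
        void onResponse(T response);
    }

    abstract static class MiniRequest<T> {
        // Called on a worker thread: turn raw bytes into the typed result.
        abstract T parseNetworkResponse(byte[] data);
        // Called by the delivery: hand the typed result to the caller's listener.
        abstract void deliverResponse(T result);
    }

    static class StringMiniRequest extends MiniRequest<String> {
        private final Listener<String> listener;

        StringMiniRequest(Listener<String> listener) {
            this.listener = listener;
        }

        @Override
        String parseNetworkResponse(byte[] data) {
            return new String(data, java.nio.charset.StandardCharsets.UTF_8);
        }

        @Override
        void deliverResponse(String result) {
            listener.onResponse(result);  // the user's callback, like onResponse above
        }
    }

    /** Stand-in for dispatcher + delivery: parse on the worker side, then deliver. */
    static <T> void execute(MiniRequest<T> request, byte[] rawResponse) {
        T parsed = request.parseNetworkResponse(rawResponse);
        request.deliverResponse(parsed);
    }
}
```

A subclass only decides how to parse the bytes and where to route the typed result; scheduling, caching, and thread-hopping all live in the framework, which is the core of Volley's design.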