Spring Boot Project Practice 8: Fetching News JSON Data
阿新 • Published: 2019-01-12
- After inspecting the data API, the NetEase News URL was found to return JSON data that is generated dynamically over time.
- Add a button to the project's query-list page and modify the JSON-parsing method to complete the data import.
Add a button at the top of the page to fetch the data:
<td style="width:150px"></td>
<td style="margin-left:20px">
    <input type="button" style="width:60px" class="l-form-buttons"
           id="getJson" name="getJson" value="Fetch JSON data">
</td>
Bind the click event:
$("#getJson").click(function () {
    var url = "/news/getJson";
    $.post(url, {}, function (res) {
        if (res.code == 200) { // the server returns the string "200"; == coerces
            $.ligerDialog.tip({icon: 'succeed', time: 1, content: res.msg});
        } else {
            $.ligerDialog.tip(res.msg);
        }
    });
});
Add a method to fetch the data. To preserve the generated JSON files, the fetched data is saved into a folder and the file name is returned. Then modify the original JSON-parsing method, and use soft-coding to put the file storage directory and the request URL into the application.properties file:
news.json.url=http://c.m.163.com/nc/article/headline/T1348647853363/0-100.html
news.json.dir=F:/springboot/springboot_solr/src/main/resources/json/
In the news controller, use the ${} placeholder together with the @Value annotation to inject the values defined in application.properties into the jsonUrl and fileDir variables:
@Value("${news.json.url}")
private String jsonUrl;

@Value("${news.json.dir}")
private String fileDir;

@RequestMapping("getJson")
@ResponseBody
public ResultData getJson() {
    try {
        // Download the JSON to a file and get the generated file name back
        String fileName = GetNewsJson.getJsonData(jsonUrl, fileDir);
        // Parse the file and index the documents into Solr
        List<NewDoc> list = JsonUtils.importDataToSolr(fileName);
        newDocSolr.addList(list);
    } catch (Exception e) {
        e.printStackTrace();
        return new ResultData("-1", "", "failure");
    }
    return new ResultData("200", "", "success");
}
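The ResultData response wrapper is not listed in this post. Judging from the constructor calls above, it takes a code, a data payload, and a message; a minimal hypothetical version (field and getter names are assumptions) might look like:

```java
// Hypothetical sketch of the ResultData wrapper used by the controller.
// The three-argument (code, data, msg) constructor is inferred from the
// calls new ResultData("200", "", "success") above; field names are guesses.
public class ResultData {
    private String code; // "200" on success, "-1" on failure
    private String data; // payload, empty in this example
    private String msg;  // human-readable message shown by the front end

    public ResultData(String code, String data, String msg) {
        this.code = code;
        this.data = data;
        this.msg = msg;
    }

    public String getCode() { return code; }
    public String getData() { return data; }
    public String getMsg() { return msg; }
}
```

With @ResponseBody on the controller method, Spring serializes this object to JSON, which is why the front-end callback can read res.code and res.msg.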
Create a new class, GetNewsJson, to fetch the JSON data:
package com.gc.utils;
import java.io.BufferedReader;
import java.io.BufferedWriter;
import java.io.File;
import java.io.FileOutputStream;
import java.io.InputStream;
import java.io.InputStreamReader;
import java.io.OutputStreamWriter;
import java.net.URL;
import java.net.URLConnection;
import java.util.Random;
import org.springframework.stereotype.Component;
@Component
public class GetNewsJson {
public static String getJsonData(String jsonUrl,String fileDir) throws Exception {
URL url = new URL(jsonUrl);
URLConnection connection = url.openConnection();
// Send a browser User-Agent so the request is not rejected as a bot
connection.addRequestProperty("User-Agent", "Mozilla/5.0 (Windows NT 6.1; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/71.0.3578.98 Safari/537.36");
connection.connect();
InputStream inputStream = connection.getInputStream();
BufferedReader re = new BufferedReader(new InputStreamReader(inputStream, "utf-8"));
// Collect the whole response; StringBuilder avoids repeated String concatenation
StringBuilder json = new StringBuilder();
String line;
while ((line = re.readLine()) != null) {
    json.append(line);
}
// Timestamp plus a random suffix keeps successive file names unique
String fileName = fileDir + System.currentTimeMillis() + new Random().nextInt(1000) + ".json";
System.out.println(json);
BufferedWriter wr = new BufferedWriter(new OutputStreamWriter(new FileOutputStream(new File(fileName)), "utf-8"));
wr.write(json.toString());
wr.flush();
re.close(); // closing the reader also closes the underlying input stream
wr.close();
return fileName;
}
}
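One caveat with getJsonData: the streams are closed only on the success path, so if readLine() throws, the connection is never released. A sketch of the read step using try-with-resources (a standalone helper, not part of the original project) could look like:

```java
import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStream;
import java.io.InputStreamReader;
import java.nio.charset.StandardCharsets;

public class StreamUtil {
    // Reads an entire UTF-8 stream into a String.
    // try-with-resources guarantees the reader (and the underlying
    // stream it wraps) is closed even if readLine() throws.
    public static String readAll(InputStream in) throws IOException {
        StringBuilder sb = new StringBuilder();
        try (BufferedReader reader = new BufferedReader(
                new InputStreamReader(in, StandardCharsets.UTF_8))) {
            String line;
            while ((line = reader.readLine()) != null) {
                sb.append(line);
            }
        }
        return sb.toString();
    }
}
```

getJsonData could then pass connection.getInputStream() to this helper and only keep the file-writing logic.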
The final step is to modify the original JSON-parsing method so it takes the file name as a parameter, with an empty check:
public class JsonUtils {
    public static List<NewDoc> importDataToSolr(String fileName) {
        InputStream in = null;
        BufferedReader br = null;
        List<NewDoc> list = null;
        try {
            // Fall back to the bundled sample file when no file name is given
            if (StringUtils.isEmpty(fileName)) {
                in = new FileInputStream(new File("F:\\springboot\\springboot_solr\\src\\main\\resources\\news.json"));
            } else {
                in = new FileInputStream(new File(fileName));
            }
            br = new BufferedReader(new InputStreamReader(in));
            String line;
            StringBuffer strb = new StringBuffer();
            while ((line = br.readLine()) != null) {
                strb.append(line);
            }
            ObjectMapper mapper = new ObjectMapper();
            // Ignore JSON properties that have no matching field on NewDoc
            mapper.configure(DeserializationFeature.FAIL_ON_UNKNOWN_PROPERTIES, false);
            JsonNode jsonNode = mapper.readTree(strb.toString());
            // The article array sits under the channel id key
            JsonNode root = jsonNode.get("T1348647853363");
            if (root.isArray()) {
                list = mapper.readValue(root.toString(), new TypeReference<List<NewDoc>>() {});
            }
            if (list != null) {
                for (NewDoc newDoc : list) {
                    // Generate a unique id for each document before indexing
                    newDoc.setId(String.valueOf(System.currentTimeMillis()) + new Random().nextInt(1000));
                    System.out.println(newDoc.toString());
                }
            }
        } catch (Exception e) {
            e.printStackTrace();
        } finally {
            try {
                if (br != null) {
                    br.close();
                }
                if (in != null) {
                    in.close();
                }
            } catch (IOException e) {
                e.printStackTrace();
            }
        }
        return list;
    }
}
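The NewDoc entity is not listed either; the code above only shows that it has setId(String) and toString(). A minimal hypothetical version (the title field is a guess based on a typical news payload, which is exactly why FAIL_ON_UNKNOWN_PROPERTIES is disabled when mapping) could be:

```java
// Hypothetical sketch of the NewDoc entity. Only setId(String) and
// toString() appear in the original code; the title field is an assumption.
// Any JSON properties without a matching field here are silently ignored
// because importDataToSolr disables FAIL_ON_UNKNOWN_PROPERTIES.
public class NewDoc {
    private String id;
    private String title;

    public String getId() { return id; }
    public void setId(String id) { this.id = id; }
    public String getTitle() { return title; }
    public void setTitle(String title) { this.title = title; }

    @Override
    public String toString() {
        return "NewDoc{id='" + id + "', title='" + title + "'}";
    }
}
```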
The finished result is shown below:
The record count grew from the original 80 to 100. This data API is updated in real time, which removes the need to collect the data ourselves; if a crawler is wanted, HttpClient plus Jsoup can be used to filter page elements and gather the data.
Next, the log4j.properties file will be configured to print and record logs.