A Roundup of Ways to Fetch Web-Page Content in PHP
①. Fetching web-page content with PHP
http://hi.baidu.com/quqiufeng/blog/item/7e86fb3f40b598c67d1e7150.html
header("Content-type: text/html; charset=utf-8");
1. Via the MSXML2.XMLHTTP COM object (Windows only; requires PHP's COM extension):
$xhr = new COM("MSXML2.XMLHTTP");
$xhr->open("GET", "http://localhost/xxx.php?id=2", false);
$xhr->send();
echo $xhr->responseText;
2. Via file_get_contents():
<?php
$url="http://www.blogjava.net/pts";
echo file_get_contents( $url );
?>
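A slightly more defensive variant of the file_get_contents() call above is worth knowing (a sketch, not part of the original post): the function accepts a stream context, which lets you set a timeout and a User-Agent, and you can test for failure before echoing. The URL and the User-Agent string here are placeholders.

```php
<?php
// Build an HTTP context with a timeout and a (hypothetical) User-Agent.
$context = stream_context_create([
    'http' => [
        'method'  => 'GET',
        'timeout' => 10,                              // seconds
        'header'  => "User-Agent: MyFetcher/1.0\r\n", // placeholder UA string
    ],
]);

// Suppress the warning on failure and check the return value instead.
$html = @file_get_contents('http://www.example.com/', false, $context);
if ($html === false) {
    echo "fetch failed\n";
} else {
    echo strlen($html) . " bytes fetched\n";
}
```

Checking for `false` matters because file_get_contents() returns `false` (not an empty string) when the request fails.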
3. Via fopen():
<?php
if ($stream = fopen('http://www.sohu.com', 'r')) {
    // print the whole page, starting at offset 10
    echo stream_get_contents($stream, -1, 10);
    fclose($stream);
}
if ($stream = fopen('http://www.sohu.net', 'r')) {
    // print the first 5 bytes
    echo stream_get_contents($stream, 5);
    fclose($stream);
}
?>
②. Fetching web-page content with PHP
http://www.blogjava.net/pts/archive/2007/08/26/99188.html
The simple approaches suggested here are the same file_get_contents() and fopen()/stream_get_contents() examples already shown in ① above, so they are not repeated.
③. Fetching a site's content and saving it as a TXT file with PHP
http://blog.chinaunix.net/u1/44325/showart_348444.html
<?php
// Note: the original used the long-deprecated ereg() family; this version
// uses preg_match() instead. Two of the original str_replace() filter
// strings were garbled in the source, so the script-tag and HTML-entity
// clean-up below is a best-effort reconstruction.
$my_book_url = 'http://book.yunxiaoge.com/files/article/html/4/4550/index.html';
preg_match('#http://book\.yunxiaoge\.com/files/article/html/[0-9]+/[0-9]+/#', $my_book_url, $myBook);
$my_book_txt = $myBook[0];                        // base URL for chapter pages

$file_handle = fopen($my_book_url, "r");          // open the index page
if (file_exists("test.txt")) unlink("test.txt");  // start with a fresh output file
$handle = fopen("test.txt", 'a');                 // open the output file once, not per line

while (!feof($file_handle)) {                     // loop to the end of the index page
    $line = fgets($file_handle);                  // read one line
    // look for links to the book's chapter pages
    if (preg_match('/href="[0-9]+\.html/', $line, $reg)) {
        $my_book_txt_url      = str_replace('href="', '', $reg[0]);
        $my_book_txt_over_url = $my_book_txt . $my_book_txt_url; // absolute chapter URL
        echo "$my_book_txt_over_url</p>";         // show progress
        $file_handle_txt = fopen($my_book_txt_over_url, "r"); // open the chapter page
        while (!feof($file_handle_txt)) {
            $line_txt = fgets($file_handle_txt);
            // on this site, chapter text lines start with a space
            if (preg_match('/^ .+/', $line_txt, $reg)) {
                $my_over_txt = $reg[0];
                $my_over_txt = str_replace("&nbsp;", " ", $my_over_txt);  // decode entities
                $my_over_txt = str_replace("<br />", "", $my_over_txt);
                $my_over_txt = preg_replace('#<script[^>]*>.*?</script>#is', '', $my_over_txt);
                $my_over_txt = str_replace("&quot;", "", $my_over_txt);
                fwrite($handle, "$my_over_txt\n");                        // append to the file
            }
        }
        fclose($file_handle_txt);
    }
}
fclose($handle);
fclose($file_handle);
echo "Done</p>";
?>
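The line-by-line link scan above can also be done in one pass: fetch the whole index page and let preg_match_all() collect every chapter link at once. A minimal sketch (the literal HTML string stands in for the fetched index page; the href pattern is the same one the script above looks for):

```php
<?php
// One-pass alternative to the line-by-line scan: pull all chapter links
// from the index page with a single preg_match_all() call.
// $index_html stands in for the result of fetching $my_book_url.
$index_html = '<a href="1001.html">Ch.1</a> <a href="1002.html">Ch.2</a>';

preg_match_all('/href="([0-9]+\.html)"/', $index_html, $matches);

foreach ($matches[1] as $page) {
    // each $page is a relative chapter file, to be appended to the base URL
    echo $page . "\n";   // prints "1001.html", then "1002.html"
}
```

Capturing with parentheses means `$matches[1]` already holds the bare filenames, so the `str_replace('href="', ...)` step is no longer needed.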
Now for a more heavyweight approach, using a class called Snoopy.
I first saw it mentioned here:
"The Snoopy package for fetching web-page content in PHP"
http://blog.declab.com/read.php/27.htm
And here is Snoopy's home page:
http://sourceforge.net/projects/snoopy/
A few brief notes:
I only discovered this gem today and went straight to download it for a look; it is built on parse_url.
Personally, I am still more comfortable with curl.
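Since curl came up: for comparison, a minimal cURL fetch looks something like this (a sketch, not from any of the sources above; it requires the php-curl extension, and the URL is a placeholder):

```php
<?php
// Minimal cURL fetch (a sketch): returns the response body as a string,
// or false on failure. Requires the php-curl extension.
function curl_get($url, $timeout = 10) {
    $ch = curl_init($url);
    curl_setopt($ch, CURLOPT_RETURNTRANSFER, true); // return the body instead of printing it
    curl_setopt($ch, CURLOPT_FOLLOWLOCATION, true); // follow redirects
    curl_setopt($ch, CURLOPT_TIMEOUT, $timeout);    // overall timeout, in seconds
    $body = curl_exec($ch);
    curl_close($ch);
    return $body; // false on error
}

$html = curl_get('http://www.example.com/');
echo $html === false ? "fetch failed\n" : strlen($html) . " bytes\n";
```

CURLOPT_RETURNTRANSFER is the option people most often forget: without it, curl_exec() prints the body and returns true instead of returning the string.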
Snoopy is a PHP class that emulates a web browser: it automates fetching web-page content and submitting forms.
Some of its features:
1. Easily fetch the contents of a web page
2. Easily fetch the text of a web page (with the HTML stripped)
3. Easily fetch the links in a web page
4. Proxy host support
5. Basic user/password authentication
6. Custom user agent, referer, cookies and header content
7. Browser redirects, with controllable redirect depth
8. Expands links in the fetched page into fully qualified URLs (the default)
9. Easy form submission and retrieval of the result
10. Support for following HTML frames (added in v0.92)
11. Support for passing cookies across redirects
For details, see the documentation in the download.
<?php
include "Snoopy.class.php";

$snoopy = new Snoopy;
$snoopy->fetchform("http://www.phpx.com/happy/logging.php?action=login");
print $snoopy->results;
?>

<?php
include "Snoopy.class.php";

$snoopy = new Snoopy;
$submit_url = "http://www.phpx.com/happy/logging.php?action=login";
$submit_vars["loginmode"]   = "normal";
$submit_vars["styleid"]     = "1";
$submit_vars["cookietime"]  = "315360000";
$submit_vars["loginfield"]  = "username";
$submit_vars["username"]    = "********"; // your username
$submit_vars["password"]    = "*******";  // your password
$submit_vars["questionid"]  = "0";
$submit_vars["answer"]      = "";
$submit_vars["loginsubmit"] = "提 交";    // the forum's submit-button label
$snoopy->submit($submit_url, $submit_vars);
print $snoopy->results;
?>
Below is Snoopy's README:
NAME:
Snoopy - the PHP net client v1.2.4
SYNOPSIS:
include "Snoopy.class.php";
$snoopy = new Snoopy;
$snoopy->fetchtext("http://www.php.net/");
print $snoopy->results;
$snoopy->fetchlinks("http://www.phpbuilder.com/");
print $snoopy->results;
$submit_url = "http://lnk.ispi.net/texis/scripts/msearch/netsearch.html";
$submit_vars["q"] = "amiga";
$submit_vars["submit"] = "Search!";
$submit_vars["searchhost"] = "Altavista";
$snoopy->submit($submit_url,$submit_vars);
print $snoopy->results;
$snoopy->maxframes=5;
$snoopy->fetch("http://www.ispi.net/");
echo "<PRE>\n";
echo htmlentities($snoopy->results[0]);
echo htmlentities($snoopy->results[1]);
echo htmlentities($snoopy->results[2]);
echo "</PRE>\n";
$snoopy->fetchform("http://www.altavista.com");
print $snoopy->results;
DESCRIPTION:
What is Snoopy?
Snoopy is a PHP class that simulates a web browser. It automates the
task of retrieving web page content and posting forms, for example.
Some of Snoopy's features:
* easily fetch the contents of a web page
* easily fetch the text from a web page (strip html tags)
* easily fetch the links from a web page
* supports proxy hosts
* supports basic user/pass authentication
* supports setting user_agent, referer, cookies and header content
* supports browser redirects, and controlled depth of redirects
* expands fetched links to fully qualified URLs (default)
* easily submit form data and retrieve the results
* supports following html frames (added v0.92)
* supports passing cookies on redirects (added v0.92)
REQUIREMENTS:
Snoopy requires PHP with PCRE (Perl Compatible Regular Expressions),
which should be PHP 3.0.9 and up. For read timeout support, it requires
PHP 4 Beta 4 or later. Snoopy was developed and tested with PHP 3.0.12.
CLASS METHODS:
fetch($URI)
-----------
This is the method used for fetching the contents of a web page.
$URI is the fully qualified URL of the page to fetch.
The results of the fetch are stored in $this->results.
If you are fetching frames, then $this->results
contains each frame fetched in an array.
fetchtext($URI)
---------------
This behaves exactly like fetch() except that it only returns
the text from the page, stripping out html tags and other
irrelevant data.
fetchform($URI)
---------------
This behaves exactly like fetch() except that it only returns
the form elements from the page, stripping out html tags and other
irrelevant data.
fetchlinks($URI)
----------------
This behaves exactly like fetch() except that it only returns
the links from the page. By default, relative links are
converted to their fully qualified URL form.
submit($URI,$formvars)
----------------------
This submits a form to the specified $URI. $formvars is an
array of the form variables to pass.
submittext($URI,$formvars)
--------------------------
This behaves exactly like submit() except that it only returns
the text from the page, stripping out html tags and other
irrelevant data.
submitlinks($URI)
----------------
This behaves exactly like submit() except that it only returns
the links from the page. By default, relative links are
converted to their fully qualified URL form.
CLASS VARIABLES: (default value in parenthesis)
$host the host to connect to
$port the port to connect to
$proxy_host the proxy host to use, if any
$proxy_port the proxy port to use, if any
$agent the user agent to masquerade as (Snoopy v0.1)
$referer referer information to pass, if any
$cookies cookies to pass if any
$rawheaders other header info to pass, if any
$maxredirs maximum redirects to allow. 0=none allowed. (5)
$offsiteok whether or not to allow redirects off-site. (true)
$expandlinks whether or not to expand links to fully qualified URLs (true)
$user authentication username, if any
$pass authentication password, if any
$accept http accept types (image/gif, image/x-xbitmap, image/jpeg, image/pjpeg, */*)
$error where errors are sent, if any
$response_code response code returned from server
$headers headers returned from server
$maxlength max return data length
$read_timeout timeout on read operations (requires PHP 4 Beta 4+)
set to 0 to disallow timeouts
$timed_out true if a read operation timed out (requires PHP 4 Beta 4+)
$maxframes number of frames we will follow
$status http status of fetch
$temp_dir temp directory that the webserver can write to. (/tmp)
$curl_path system path to cURL binary, set to false if none
EXAMPLES:
Example: fetch a web page and display the return headers and
the contents of the page (html-escaped):
include "Snoopy.class.php";
$snoopy = new Snoopy;
$snoopy->user = "joe";
$snoopy->pass = "bloe";
if($snoopy->fetch("http://www.slashdot.org/"))
{
echo "response code: ".$snoopy->response_code."<br>\n";
while(list($key,$val) = each($snoopy->headers))
echo $key.": ".$val."<br>\n";
echo "<p>\n";
echo "<PRE>".htmlspecialchars($snoopy->results)."</PRE>\n";
}
else
echo "error fetching document: ".$snoopy->error."\n";
Example: submit a form and print out the result headers
and html-escaped page:
include "Snoopy.class.php";
$snoopy = new Snoopy;
$submit_url = "http://lnk.ispi.net/texis/scripts/msearch/netsearch.html";
$submit_vars["q"] = "amiga";
$submit_vars["submit"] = "Search!";
$submit_vars["searchhost"] = "Altavista";
if($snoopy->submit($submit_url,$submit_vars))
{
while(list($key,$val) = each($snoopy->headers))
echo $key.": ".$val."<br>\n";
echo "<p>\n";
echo "<PRE>".htmlspecialchars($snoopy->results)."</PRE>\n";
}
else
echo "error fetching document: ".$snoopy->error."\n";
Example: showing functionality of all the variables:
include "Snoopy.class.php";
$snoopy = new Snoopy;
$snoopy->proxy_host = "my.proxy.host";
$snoopy->proxy_port = "8080";
$snoopy->agent = "(compatible; MSIE 4.01; MSN 2.5; AOL 4.0; Windows 98)";
$snoopy->referer = "http://www.microsnot.com/";
$snoopy->cookies["SessionID"] = "238472834723489";
$snoopy->cookies["favoriteColor"] = "RED";
$snoopy->rawheaders["Pragma"] = "no-cache";
$snoopy->maxredirs = 2;
$snoopy->offsiteok = false;
$snoopy->expandlinks = false;
$snoopy->user = "joe";
$snoopy->pass = "bloe";
if($snoopy->fetchtext("http://www.phpbuilder.com"))
{
while(list($key,$val) = each($snoopy->headers))
echo $key.": ".$val."<br>\n";
echo "<p>\n";
echo "<PRE>".htmlspecialchars($snoopy->results)."</PRE>\n";
}
else
echo "error fetching document: ".$snoopy->error."\n";
Example: fetched framed content and display the results
include "Snoopy.class.php";
$snoopy = new Snoopy;
$snoopy->maxframes = 5;
if($snoopy->fetch("http://www.ispi.net/"))
{
echo "<PRE>".htmlspecialchars($snoopy->results[0])."</PRE>\n";
echo "<PRE>".htmlspecialchars($snoopy->results[1])."</PRE>\n";
echo "<PRE>".htmlspecialchars($snoopy->results[2])."</PRE>\n";
}
else
echo "error fetching document: ".$snoopy->error."\n";
<?php
// Collect every content URL and save the list to a file
function get_index($save_file, $prefix = "index_") {
    $count = 68;
    $i = 1;
    if (file_exists($save_file)) @unlink($save_file);
    $fp = fopen($save_file, "a+") or die("Open " . $save_file . " failed");
    while ($i < $count) {
        $url = $prefix . $i . ".htm";
        echo "Get " . $url . "...";
        // get_url() and get_content_url() are helpers defined elsewhere in the original
        $url_str = get_content_url(get_url($url));
        echo " OK\n";
        fwrite($fp, $url_str);
        ++$i;
    }
    fclose($fp);
}

// Fetch the target multimedia objects
function get_object($url_file, $save_file, $split = "|--:**:--|") {
    if (!file_exists($url_file)) die($url_file . " not exist");
    $file_arr = file($url_file);
    if (!is_array($file_arr) || empty($file_arr)) die($url_file . " has no content");
    $url_arr = array_unique($file_arr);
    if (file_exists($save_file)) @unlink($save_file);
    $fp = fopen($save_file, "a+") or die("Open save file " . $save_file . " failed");
    foreach ($url_arr as $url) {
        if (empty($url)) continue;
        echo "Get " . $url . "...";
        $html_str = get_url($url);
        echo $html_str;
        echo $url;
        exit; // debugging short-circuit, present in the original
        $obj_str = get_content_object($html_str);
        echo " OK\n";
        fwrite($fp, $obj_str);
    }
    fclose($fp);
}

// Walk a directory and collect each file's contents
function get_dir($save_file, $dir) {
    $dp = opendir($dir);
    if (file_exists($save_file)) @unlink($save_file);
    $fp = fopen($save_file, "a+") or die("Open save file " . $save_file . " failed");
    // (the snippet is truncated here in the source)