Implementing CAS Server Authentication in Python
阿新 · Published: 2019-01-07
## CAS Login Flow ##
The flow is as illustrated in https://my.oschina.net/aiguozhe/blog/160715.
Since CAS does not expose a REST endpoint for authentication, a workable approach is to simulate the browser's requests, filling in the username and password to drive the authentication flow.
1. Capturing the login exchange with Fiddler:
Step 1:
Request header: (captured in Fiddler)
Response header: a 302 redirect to the authentication page
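For reference, this first hop can also be observed from code instead of Fiddler. A minimal sketch with requests; the protected URL is the alert.cloudytrace.com page used later in this post, and its reachability from your environment is an assumption:

```python
import requests

# Hypothetical starting point: the protected application page used later in this post.
app_url = "http://alert.cloudytrace.com/web/index.html"

# Don't follow redirects, so the 302 and its Location header remain visible.
resp = requests.get(app_url, allow_redirects=False)
print(resp.status_code)               # expect 302
print(resp.headers.get("Location"))   # expect the CAS /login URL with a service= parameter
```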
Step 2:
Request header: (captured in Fiddler)
Response header: (captured in Fiddler)
Note that the TGT shown in the figure above is not actually returned to the requesting side (a small inaccuracy in the flow diagram).
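The form submitted in this step has to carry not only the username and password but also the hidden fields CAS renders into the login page (typically lt, execution and _eventId). A minimal sketch of collecting them with lxml, mirroring what the full script below does; the //section XPath is specific to this particular login page and is an assumption:

```python
import requests
import lxml.html

# The CAS login URL with the target service appended, as used later in this post.
login_url = ("http://mgsso.cloudytrace.com/login?"
             "service=http%3A%2F%2Falert.cloudytrace.com%2Fweb%2Findex.html")

session = requests.session()
page = session.get(login_url)

# Collect every hidden <input> inside the login form's <section> element.
doc = lxml.html.fromstring(page.text)
hidden = doc.xpath('//section/input[@type="hidden"]')
form = {inp.attrib["name"]: inp.attrib["value"] for inp in hidden}
print(form)  # typically contains lt, execution, _eventId, ...
```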
Step 3:
Request header: (captured in Fiddler)
Response header: (captured in Fiddler)
Once authentication succeeds, the CASTGC cookie is generated; its value is the key of the session cached on the SSO server.
Note also that the response is a 302 redirect back to the address that originally required login, with a ticket appended (the ST, used for authentication between the CAS client and the CAS server).
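The same step can be reproduced from code by submitting the form without following redirects and inspecting the headers. A minimal sketch that continues from the previous snippet (reusing session, login_url and form), with placeholder credentials:

```python
# Continue from the previous sketch: add credentials and submit the login form.
form["username"] = "******"
form["password"] = "******"

resp = session.post(login_url, data=form, allow_redirects=False)

# On success CAS issues the CASTGC cookie (the key of the session cached on the SSO server) ...
print(resp.headers.get("Set-Cookie"))
# ... and 302-redirects back to the service URL with ?ticket=ST-... appended.
print(resp.headers.get("Location"))
```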
Step 4:
Because of the 302 redirect, the request continues.
Request header: (captured in Fiddler)
Response header: (captured in Fiddler)
After the ST has been validated, CAS deletes it, and the browser is finally redirected to the page that was originally requested.
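Step 4 is normally performed by the CAS client inside the protected application: it takes the ticket from the query string and calls the CAS server's /serviceValidate endpoint, which replies with an XML document naming the authenticated user. A minimal sketch of that validation call, reusing the URLs from this post and a hypothetical ticket value:

```python
import requests

validate_url = "http://mgsso.cloudytrace.com/serviceValidate"
params = {
    "service": "http://alert.cloudytrace.com/web/index.html",
    "ticket": "ST-xxxx",  # hypothetical: the ticket the browser was redirected back with
}

# CAS answers with <cas:authenticationSuccess> (containing <cas:user>) on success,
# or <cas:authenticationFailure> otherwise; the ST becomes invalid after this call.
print(requests.get(validate_url, params=params).text)
```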
------- Heads up: here comes the important part -------
The most authoritative flow diagram is the one from the official site:
https://apereo.github.io/cas/4.1.x/images/cas_flow_diagram.png
2. Implementing the flow in Python
```python
#!/usr/bin/env python3
# -*- coding: utf-8 -*-
"""
Created on Fri Aug 10 14:52:52 2018

@author: kinghuang
"""
import requests
import lxml.html


class AuthUtil:
    # Headers for the login POST (copied from the Fiddler capture)
    headers = {
        "Accept": "text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,image/apng,*/*;q=0.8",
        "Accept-Encoding": "gzip, deflate",
        "Accept-Language": "zh-CN,zh;q=0.9",
        "Cache-Control": "max-age=0",
        "Connection": "keep-alive",
        "Content-Length": "162",
        "Content-Type": "application/x-www-form-urlencoded",
        "Cookie": "JSESSIONID=x5oK-NF_Z3QBLzDW4v8t3v2B.mgssoprdapp02",
        "Host": "mgsso.cloudytrace.com",
        "Upgrade-Insecure-Requests": "1",
        "User-Agent": "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/68.0.3440.84 Safari/537.36"
    }
    # Headers later used by the crawler; the Cookie is filled in after login succeeds
    event_headers = {
        "Accept": "application/json, text/plain, */*",
        "Accept-Encoding": "gzip, deflate",
        "Accept-Language": "zh-CN,zh;q=0.9",
        "Connection": "keep-alive",
        "Cookie": "",
        "User-Agent": "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/68.0.3440.84 Safari/537.36"
    }
    data = {
        "username": "******",
        "password": "******"
    }

    def auth(self):
        ### STEP 1 ###
        url_login = 'http://mgsso.cloudytrace.com/login?service=http%3A%2F%2Falert.cloudytrace.com%2Fweb%2Findex.html'
        print("begin to login ..")
        sesh = requests.session()
        req = sesh.get(url_login)
        html_content = req.text

        ### STEP 2 ###
        # Parse the login page for the hidden form inputs (lt, execution, _eventId, ...)
        login_html = lxml.html.fromstring(html_content)
        hidden_inputs = login_html.xpath(r'//section/input[@type="hidden"]')
        user_form = {x.attrib["name"]: x.attrib["value"] for x in hidden_inputs}
        user_form["username"] = self.data['username']
        user_form["password"] = self.data['password']

        self.headers['Cookie'] = req.headers['Set-Cookie']
        responseRes = sesh.post(req.url, data=user_form, headers=self.headers)
        # Sometimes CAS presents the login page again; retry once if no CASTGC cookie yet
        if not self.findStr(responseRes.request.headers['Cookie'], 'CASTGC'):
            responseRes = sesh.post(req.url, data=user_form, headers=self.headers)

        ### STEP 3 ###
        loginSuccess_headers = responseRes.request.headers
        # !!! NOTE: the cookie must be read from responseRes.request.headers -- because of the
        # redirect, only the final request's Cookie header carries CASTGC !!!
        self.event_headers["Cookie"] = responseRes.request.headers["Cookie"]
        print(f"---responseRes.request.headers---{responseRes.request.headers}")
        return self.event_headers, loginSuccess_headers

    def logout(self, headers):
        logout_url = 'http://mgsso.cloudytrace.com/logout'
        logout_req = requests.session()
        logout_req.get(logout_url, headers=headers)

    def findStr(self, source, target):
        return source.find(target) != -1


class EventCrawler:
    def crawEvent(self, headers):
        ### URL of the content to crawl ###
        event_url = "http://alert.cloudytrace.com/event/query.htm?endTime=2018%2F08%2F12+17:28:40&pageNo=1&pageSize=10&startTime=2018%2F08%2F05+17:28:40&systemId="
        req = requests.session()
        res = req.get(event_url, headers=headers)
        print(f"!res_text!={res.text}")


if __name__ == '__main__':
    auth = AuthUtil()
    headers = auth.auth()
    crawler_headers = headers[0]
    logout_headers = headers[1]
    EventCrawler().crawEvent(headers=crawler_headers)
    auth.logout(headers=logout_headers)
```
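A few practical notes on running the script: the `******` placeholders in `data` must be replaced with real credentials; the hard-coded JSESSIONID in `headers` only matters until the first response, because the Cookie header is overwritten with the `Set-Cookie` value returned by the login page; and the `//section/input[@type="hidden"]` XPath depends on the markup of this particular CAS login page, so it may need adjusting for other deployments.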
That is all.
References:
1. https://my.oschina.net/thinwonton/blog/1456722
2. https://my.oschina.net/aiguozhe/blog/160715