1. Install and import the following libraries

```python
import requests
from bs4 import BeautifulSoup as bs
import pandas as pd
from pandas import Series, DataFrame
```
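If any of these are missing, they can be installed first with `pip install requests beautifulsoup4 pandas` (the standard PyPI package names; Series and DataFrame ship with pandas).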
2. Scraping the data

2.1 The website's content

What we want is mainly the daily weather table shown lower on the page.
2.2 Connecting to the site

```python
headers = {
    'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/104.0.5112.102 Safari/537.36 Edg/104.0.1293.63',
    'Host': 'lishi.tianqi.com',
    'Accept-Encoding': "gzip, deflate",
    'Connection': "keep-alive",
    'cache-control': "no-cache"
}
url = 'https://lishi.tianqi.com/ganyu/202208.html'  # change to the place and month you want to scrape
resp = requests.request("GET", url, headers=headers)
resp
```
If the output is `<Response [200]>`, the connection to the site has succeeded.
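If you would rather have the script fail loudly than eyeball the printed response, requests can check the status code for you; a minimal sketch reusing the `resp` from above:

```python
resp.raise_for_status()   # raises requests.HTTPError on any 4xx/5xx response
print(resp.status_code)   # 200 means the request succeeded
```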
2.3 Parsing the page

We decode the fetched content as 'utf-8' and parse the page with the BeautifulSoup library.

```python
resp.encoding = 'utf-8'
soup = bs(resp.text, 'html.parser')
```

soup now holds the entire page, and we need to extract only the parts we want. Go back to the target page and press F12; the page source is visible under Elements.

Once you understand its structure, you can use BeautifulSoup's find and find_all to select the content you want.
```python
data_all = []
tian_three = soup.find("div", {"class": "tian_three"})  # the block holding the daily table
lishitable_content = tian_three.find_all("li")          # one <li> per day
for i in lishitable_content:
    lishi_div = i.find_all("div")                       # one <div> per cell
    data = []
    for j in lishi_div:
        data.append(j.text)
    data_all.append(data)
```
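Note that `soup.find` returns None when nothing matches, so if the site ever changes its markup the code above would die with an AttributeError on `tian_three.find_all`. A hedged variant of the same extraction that checks for this (it assumes the same tian_three/li/div structure):

```python
tian_three = soup.find("div", {"class": "tian_three"})
if tian_three is None:
    raise RuntimeError("'tian_three' block not found; the page layout may have changed")

# One list of cell texts per day; same result as the loop above
data_all = [[div.text for div in li.find_all("div")]
            for li in tian_three.find_all("li")]
```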
Let's take a look at what data_all holds now.

Because the day this was written was 2022-08-21, the month's data stops at August 21.
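A quick way to check the row count and peek at the first few rows before building the DataFrame:

```python
print(len(data_all))       # number of days scraped so far
for row in data_all[:3]:   # first three rows
    print(row)
```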
3. Cleaning and saving the data

Give each column a name:
```python
weather = pd.DataFrame(data_all)
# date+weekday, daily high, daily low, weather, wind info (direction + force)
weather.columns = ["当日信息", "最高气温", "最低气温", "天气", "风向信息"]
weather_shape = weather.shape
weather
```
Displaying weather shows the scraped result.
In the weather table, the 当日信息 column holds the date and the weekday together, which is inconvenient for later analysis, so we process the data:
```python
weather['当日信息'] = weather['当日信息'].apply(str)
result = DataFrame(weather['当日信息'].apply(lambda x: Series(str(x).split(' '))))
result = result.loc[:, 0:1]
result.columns = ['日期', '星期']  # date, weekday
weather.join(result)
```
The joined result then shows the split 日期 and 星期 columns.
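Keep in mind that `join` returns a new DataFrame and leaves `weather` itself unchanged; to actually keep the split columns, assign the result back:

```python
weather = weather.join(result)  # persist 日期/星期 alongside the original columns
```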
If you have no further requirements for the data, you can save it at this point and rearrange rows and columns in the CSV by hand.

If you do want a cleaner layout, proceed as follows:
```python
weather['当日信息'] = weather['当日信息'].apply(str)
result = DataFrame(weather['当日信息'].apply(lambda x: Series(str(x).split(' '))))
result = result.loc[:, 0:1]
result.columns = ['日期', '星期']    # date, weekday
weather['风向信息'] = weather['风向信息'].apply(str)
result1 = DataFrame(weather['风向信息'].apply(lambda x: Series(str(x).split(' '))))
result1 = result1.loc[:, 0:1]
result1.columns = ['风向', '级数']   # wind direction, wind force
weather = weather.drop(columns='当日信息')
weather = weather.drop(columns='风向信息')
weather.insert(loc=0, column='日期', value=result['日期'])
weather.insert(loc=1, column='星期', value=result['星期'])
weather.insert(loc=5, column='风向', value=result1['风向'])
weather.insert(loc=6, column='级数', value=result1['级数'])
```
At this point weather displays quite nicely.
Finally, save the file directly in CSV format:

```python
weather.to_csv("XXX.csv", encoding="utf_8")
```
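One optional tweak: if the CSV will be opened in Excel on Windows, plain UTF-8 without a BOM often renders the Chinese headers as garbage, so `utf_8_sig` is commonly used instead, and `index=False` drops pandas's row numbers. A sketch (the file name here is just a placeholder):

```python
# utf_8_sig writes a BOM so Excel detects UTF-8; index=False omits the index column
weather.to_csv("weather.csv", encoding="utf_8_sig", index=False)
```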
Complete code
```python
import requests
from bs4 import BeautifulSoup as bs
import pandas as pd
from pandas import Series, DataFrame

headers = {
    'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/104.0.5112.102 Safari/537.36 Edg/104.0.1293.63',
    'Host': 'lishi.tianqi.com',
    'Accept-Encoding': "gzip, deflate",
    'Connection': "keep-alive",
    'cache-control': "no-cache"
}
url = 'https://lishi.tianqi.com/ganyu/202208.html'  # change to the place and month you want

# Fetch and parse the page
resp = requests.request("GET", url, headers=headers)
resp.encoding = 'utf-8'
soup = bs(resp.text, 'html.parser')

# Extract one row of text cells per day
data_all = []
tian_three = soup.find("div", {"class": "tian_three"})
lishitable_content = tian_three.find_all("li")
for i in lishitable_content:
    lishi_div = i.find_all("div")
    data = []
    for j in lishi_div:
        data.append(j.text)
    data_all.append(data)

# Build the DataFrame and name the columns
weather = pd.DataFrame(data_all)
weather.columns = ["当日信息", "最高气温", "最低气温", "天气", "风向信息"]
weather_shape = weather.shape

# Split "date weekday" and "direction force" into separate columns
weather['当日信息'] = weather['当日信息'].apply(str)
result = DataFrame(weather['当日信息'].apply(lambda x: Series(str(x).split(' '))))
result = result.loc[:, 0:1]
result.columns = ['日期', '星期']
weather['风向信息'] = weather['风向信息'].apply(str)
result1 = DataFrame(weather['风向信息'].apply(lambda x: Series(str(x).split(' '))))
result1 = result1.loc[:, 0:1]
result1.columns = ['风向', '级数']
weather = weather.drop(columns='当日信息')
weather = weather.drop(columns='风向信息')
weather.insert(loc=0, column='日期', value=result['日期'])
weather.insert(loc=1, column='星期', value=result['星期'])
weather.insert(loc=5, column='风向', value=result1['风向'])
weather.insert(loc=6, column='级数', value=result1['级数'])

# Save to CSV
weather.to_csv("XX的天气.csv", encoding="utf_8")
```