
If you had to name the most talked-about TV series recently, it would have to be 《A Home》. You may not have watched it episode by episode, but if you use Weibo you have certainly heard the name: the show has made Weibo's trending list many times and has topped it more than once. It mainly tells the story of real estate agents selling houses, and its original working title was in fact "The House Seller".

Let's use Python to analyze this series. There are two main steps: getting the data and analyzing it. As the data source we will use the comments section of 《A Home》 on Douban.

Get the data

The Douban page for 《A Home》 is https://movie.douban.com/subject/30482003/. Let's open it and take a look, as shown in the figure below:

[Figure: Douban rating overview for 《A Home》]

From the figure we can see that more than 90,000 people have rated the show so far. Three- and four-star ratings make up the majority, and the overall score of 6.2 is only a passing grade, which is a bit underwhelming.

Scroll down the page to the comments section, as shown in the figure below:

[Figure: Douban comments section for 《A Home》]

We can see that there are currently more than 30,000 comments. Douban limits how many of them you can view: anonymous visitors can see at most 200, and logged-in users at most 500, which means we can grab at most 500 comments. The fields we want to capture are: user nickname, star rating, comment time, and comment text. After grabbing this information we save it to a CSV file. The code is as follows:

import requests, time, random, pandas as pd
from lxml import etree

def spider():
    url = 'https://accounts.douban.com/j/mobile/login/basic'
    headers = {"User-Agent": 'Mozilla/4.0 (compatible; MSIE 8.0; Windows NT 6.0; Trident/4.0)'}
    # Comments URL for 《A Home》; start is the paging offset, 20 comments per page
    url_comment = 'https://movie.douban.com/subject/30482003/comments?start=%d&limit=20&sort=new_score&status=P'
    data = {
        'ck': '',
        'name': 'your username',
        'password': 'your password',
        'remember': 'false',
        'ticket': ''
    }
    session = requests.session()
    session.post(url=url, headers=headers, data=data)
    # Initialize 4 lists to hold the username, star rating, time and comment text
    users = []
    stars = []
    times = []
    content = []
    # Grab 500 comments, 20 per page; 500 is the upper limit Douban allows
    for i in range(0, 500, 20):
        # Fetch the HTML
        data = session.get(url_comment % i, headers=headers)
        # Status code 200 means success
        print('Page', i, 'status code:', data.status_code)
        # Pause 0-1 second to avoid getting the IP banned
        time.sleep(random.random())
        # Parse the HTML
        selector = etree.HTML(data.text)
        # Use XPath to get all comments on the page
        comments = selector.xpath('//div[@class="comment"]')
        # Walk through the comments and extract the details
        for comment in comments:
            # Username
            user = comment.xpath('.//h3/span[2]/a/text()')[0]
            # Star rating: the class looks like "allstar50 rating", so take the digit
            star = comment.xpath('.//h3/span[2]/span[2]/@class')[0][7:8]
            # Comment time; it can be missing, so check before indexing
            date_time = comment.xpath('.//h3/span[2]/span[3]/@title')
            if len(date_time) != 0:
                date_time = date_time[0]
            else:
                date_time = None
            # Comment text
            comment_text = comment.xpath('.//p/span/text()')[0].strip()
            # Append everything to the lists
            users.append(user)
            stars.append(star)
            times.append(date_time)
            content.append(comment_text)
    # Wrap the lists in a dictionary
    comment_dic = {'user': users, 'star': stars, 'time': times, 'comments': content}
    # Convert to a DataFrame
    comment_df = pd.DataFrame(comment_dic)
    # Save all the data
    comment_df.to_csv('data.csv')
    # Save the comment text separately
    comment_df['comments'].to_csv('comment.csv', index=False)
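The function above only defines the crawl. A minimal way to run it and sanity-check the saved file (a sketch, assuming the data.csv layout written by spider above) is:

import pandas as pd

# Run the spider defined above, then take a quick look at what was saved
spider()
df = pd.read_csv('data.csv')
print(df.shape)                    # number of comments and columns
print(df.head())                   # first few rows
print(df['star'].value_counts())   # distribution of star ratings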

Analyze the data

Now that we have the data, let's use Python to analyze it.

Number of comments

First, let's look at how these 500 comments are distributed per day, and then show the data with a line chart. The code is as follows:

import pandas as pd, matplotlib.pyplot as plt

csv_data = pd.read_csv('data.csv')
df = pd.DataFrame(csv_data)
# Count the number of comments per day
df_gp = df.groupby(['time']).size()
values = df_gp.values.tolist()
index = df_gp.index.tolist()
# Set the canvas size
plt.figure(figsize=(10, 6))
# Plot the data
plt.plot(index, values, label='comments')
# Add a number label above each point
for a, b in zip(index, values):
    plt.text(a, b, b, ha='center', va='bottom', fontsize=13, color='black')
plt.title('Number of comments over time')
plt.xticks(rotation=330)
plt.tick_params(labelsize=10)
plt.ylim(0, 200)
plt.legend(loc='upper right')
plt.show()

Here is the resulting chart:

[Figure: line chart of the number of comments per day]

As we can see from the chart, February 21 and 22 have the most comments. February 21 is the premiere date, so a large number of comments that day is expected; the high count on February 22 is probably the result of the show spreading further online after the premiere. After that the buzz dies down over time, and the number of comments declines to a relatively stable level.
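One caveat: if the scraped time field contains a full timestamp rather than just a date, grouping by the raw string would split each day into many tiny groups. A minimal sketch of normalizing it to a plain date first (assuming the data.csv layout above):

import pandas as pd

df = pd.read_csv('data.csv')
# Normalize the time column to a plain date so comments group by day
df['date'] = pd.to_datetime(df['time'], errors='coerce').dt.date
daily_counts = df.groupby('date').size()
print(daily_counts)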

Character analysis

Next, let's count how many times the main characters are mentioned in the comments section, and then show the data with a bar chart. The code is as follows:

import pandas as pd, jieba, matplotlib.pyplot as plt

csv_data = pd.read_csv('data.csv')
# Main characters and their mention counts (the keys stand in for the characters'
# original Chinese names, which is what the comments actually contain)
roles = {'aunt': 0, 'Room is like a brocade': 0, 'prince': 0, 'Sparkling': 0,
         'Deep fried dough sticks': 0, 'Loushanguan': 0, 'Fish turn into Dragons': 0}
names = list(roles.keys())
# Register each name with jieba so it is treated as a single word
for name in names:
    jieba.add_word(name)
# Count how often each name appears in the comment text
for row in csv_data['comments']:
    row = str(row)
    for name in names:
        count = row.count(name)
        roles[name] += count
plt.figure(figsize=(8, 5))
# Plot the data
plt.bar(list(roles.keys()), list(roles.values()), width=0.5, label='mentions',
        color=['g', 'r', 'dodgerblue', 'c', 'm', 'y', 'aquamarine'])
# Add a number label above each bar
for a, b in zip(list(roles.keys()), list(roles.values())):
    plt.text(a, b, b, ha='center', va='bottom', fontsize=13, color='black')
plt.title('Number of times each character is mentioned')
plt.xticks(rotation=270)
plt.tick_params(labelsize=10)
plt.ylim(0, 30)
plt.legend(loc='upper right')
plt.show()

Here is the resulting chart:

[Figure: bar chart of character mention counts]

We can roughly infer how popular each character is from the number of times they are mentioned.
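Since jieba is already imported, an alternative is to segment each comment and count whole tokens rather than raw substrings. A minimal sketch (the English names below are placeholders for the characters' original Chinese names):

import pandas as pd, jieba
from collections import Counter

csv_data = pd.read_csv('data.csv')
# Placeholder names; in practice use the characters' Chinese names
names = ['aunt', 'Room is like a brocade', 'prince', 'Sparkling',
         'Deep fried dough sticks', 'Loushanguan', 'Fish turn into Dragons']
# Register each name so jieba keeps it as a single token
for name in names:
    jieba.add_word(name)
counter = Counter()
for row in csv_data['comments']:
    # Count whole tokens instead of substrings
    counter.update(w for w in jieba.lcut(str(row)) if w in names)
print(counter)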

Star rating changes

Next, let's look at the overall trend of star ratings over recent days based on the data we obtained. When there are many ratings on a single day, we take the average. The code is as follows:

import pandas as pd, numpy as np, matplotlib.pyplot as plt

csv_data = pd.read_csv('data.csv')
df_time = csv_data.groupby(['time']).size()
index = df_time.index.tolist()
value = [0] * len(index)
# Build a dictionary: date -> average star rating
dic = dict(zip(index, value))
for k, v in dic.items():
    stars = csv_data.loc[csv_data['time'] == str(k), 'star']
    # Average the star ratings for that day
    avg = np.mean(list(map(int, stars.values.tolist())))
    dic[k] = round(avg, 2)
# Set the canvas size
plt.figure(figsize=(9, 6))
# Plot the data
plt.plot(list(dic.keys()), list(dic.values()), label='star rating')
plt.title('Star rating over time')
plt.xticks(rotation=330)
plt.tick_params(labelsize=10)
plt.ylim(0, 5)
plt.legend(loc='upper right')
plt.show()

Here is the resulting chart:

[Figure: line chart of the average star rating per day]

From the available data, the overall star rating of 《A Home》 in the comments section stays at around 2 stars. So although the show is a hot topic, the audience is not very satisfied with it.
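The same daily averages can also be computed directly with a pandas groupby, which avoids the manual dictionary loop. A minimal sketch, assuming the data.csv produced earlier (to_numeric coerces any non-numeric star values to NaN so they do not break the mean):

import pandas as pd

csv_data = pd.read_csv('data.csv')
# Coerce non-numeric star values (e.g. comments without a rating) to NaN,
# then average the ratings for each day
csv_data['star'] = pd.to_numeric(csv_data['star'], errors='coerce')
daily_avg = csv_data.groupby('time')['star'].mean().round(2)
print(daily_avg)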

Word cloud

Finally, let's render all the comments as a word cloud, which gives a more intuitive view of which words appear most often in the comments section. The code is as follows:

from wordcloud import WordCloud
import numpy as np, jieba
from PIL import Image

def jieba_():
    # Open the comment data file
    content = open('comment.csv', 'rb').read()
    # Segment the text with jieba
    word_list = jieba.cut(content)
    words = []
    # Words to filter out (in the original these are common Chinese filler words)
    remove_words = ['as well as', "Can't", 'some', 'that', 'Only',
                    'however', 'thing', 'This', 'all', 'such',
                    'however', 'Whole film', "One o'clock", 'a', 'One',
                    'what', 'although', 'everything', 'Looks like', 'equally',
                    'Can only', 'No', 'A kind of', 'This', 'in order to']
    for word in word_list:
        if word not in remove_words:
            words.append(word)
    global word_cloud
    # Join the words with commas
    word_cloud = ','.join(words)

def cloud():
    # Open the background image for the word cloud
    cloud_mask = np.array(Image.open('bg.jpg'))
    # Configure the word cloud
    wc = WordCloud(
        # White background
        background_color='white',
        # Background shape
        mask=cloud_mask,
        # Maximum number of words to show
        max_words=100,
        # Font that can display Chinese
        font_path='./fonts/simhei.ttf',
        # Maximum font size
        max_font_size=80
    )
    global word_cloud
    # Generate the word cloud
    x = wc.generate(word_cloud)
    # Render the word cloud as an image
    image = x.to_image()
    # Show the image
    image.show()
    # Save the image
    wc.to_file('anjia.png')

jieba_()
cloud()

Here is the resulting image:

[Figure: word cloud generated from the comments]
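As a variation, WordCloud can filter common words itself through its stopwords parameter, which replaces the manual remove_words loop above. A minimal sketch (the file names, font path and stopword set are assumptions):

import jieba
from wordcloud import WordCloud

# Read the raw comment text
text = open('comment.csv', encoding='utf-8').read()
# Let WordCloud drop the filler words itself
stopwords = {'as well as', "Can't", 'some', 'that', 'Only'}
wc = WordCloud(background_color='white',
               font_path='./fonts/simhei.ttf',
               max_words=100,
               stopwords=stopwords)
wc.generate(','.join(jieba.cut(text)))
wc.to_file('anjia_stopwords.png')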

Summary

In this article we crawled the Douban comments for 《A Home》 and visualized them, which gives a rough picture of how the audience feels about the show. Of course, since the sample size is limited, the results may deviate somewhat from users' actual opinions.