Contents

  • 1. Series data structure
    • 1.1 Series supports NumPy features (positional indexing)
    • 1.2 Series supports dictionary features (label indexing)
    • 1.3 Series missing-data handling
  • 2. DataFrame data structure
    • 2.1 Generating an array of timestamps: date_range
  • 3. DataFrame attributes
  • 4. Selecting values from a DataFrame
    • 4.1 Selecting by column
    • 4.2 loc (selecting by row label)
    • 4.3 iloc (similar to NumPy array indexing)
    • 4.4 Selecting with boolean conditions
  • 5. Replacing values in a DataFrame
  • 6. Reading a CSV file
  • 7. Handling missing data
  • 8. Merging data
  • 9. Importing and exporting data
    • 9.1 Reading files to import data
    • 9.2 Writing files to export data
  • 10. Reading JSON files with pandas
    • 10.1 The five forms of the orient parameter
  • 11. Reading SQL with pandas


pandas official documentation: https://pandas.pydata.org/pandas-docs/stable/?v=20190307135750

pandas is built on NumPy and can be viewed as a tool for processing text or tabular data. pandas has two main data structures: Series, which resembles a one-dimensional NumPy array, and DataFrame, which resembles a multidimensional table.

pandas is a core Python module for data analysis. It mainly provides five kinds of functionality (item 2 is sketched just after this list):

  1. File access: supports databases (SQL), html, json, pickle, csv (txt, excel), sas, stata, hdf, etc.
  2. Single-table operations: create, read, update, and delete; slicing; higher-order functions; group aggregation; and cross conversion with dict and list.
  3. Splicing and merging multiple tables.
  4. Simple plotting.
  5. Simple statistical analysis.
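
A minimal sketch of item 2's dict/list cross conversion (nothing here beyond pandas itself; the column names are made up for illustration):

import pandas as pd

# Build a DataFrame from a dict of lists, then convert back
df = pd.DataFrame({'name': ['a', 'b'], 'score': [90, 85]})
print(df.to_dict())        # {'name': {0: 'a', 1: 'b'}, 'score': {0: 90, 1: 85}}
print(df.values.tolist())  # [['a', 90], ['b', 85]]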

1. Series data structure

A Series is a one-dimensional array-like object consisting of a sequence of data values and an associated array of data labels (its index).

A Series behaves like both a list (array) and a dictionary.

import numpy as np
import pandas as pd
df = pd.Series(0, index=['a', 'b', 'c', 'd'])
print(df)
a    0
b    0
c    0
d    0
dtype: int64
print(df.values)
[0 0 0 0]
print(df.index)
Index(['a', 'b', 'c', 'd'], dtype='object')

1.1 Series supports NumPy features (positional indexing)

Feature                         Example
Create a Series from ndarray    Series(arr)
Operations with a scalar        df * 2
Operations between two Series   df1 + df2
Indexing                        df[0], df[[1, 2, 4]]
Slicing                         df[0:2]
Universal functions             np.abs(df)
Boolean filtering               df[df > 0]
arr = np.array([1, 2, 3, 4, np.nan])
print(arr)
[ 1.  2.  3.  4. nan]
df = pd.Series(arr, index=['a', 'b', 'c', 'd', 'e'])
print(df)
a    1.0
b    2.0
c    3.0
d    4.0
e    NaN
dtype: float64
print(df**2)
a     1.0
b     4.0
c     9.0
d    16.0
e     NaN
dtype: float64
print(df[0])  # positional access; newer pandas prefers df.iloc[0]
1.0
print(df['a'])
1.0
print(df[[0, 1, 2]])
a    1.0
b    2.0
c    3.0
dtype: float64
print(df[0:2])
a    1.0
b    2.0
dtype: float64
np.sin(df)
a    0.841471
b    0.909297
c    0.141120
d   -0.756802
e         NaN
dtype: float64
df[df > 1]
b    2.0
c    3.0
d    4.0
dtype: float64

1.2 Series supports dictionary features (label indexing)

Feature                       Example
Create a Series from a dict   Series(dic)
in operator                   'a' in sr
Key indexing                  sr['a'], sr[['a', 'b', 'd']]
df = pd.Series({'a': 1, 'b': 2})
print(df)
a    1
b    2
dtype: int64
print('a' in df)
True
print(df['a'])
1
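The multi-label lookup from the table can be sketched on the same Series; note that asking for a label that does not exist (like 'd' here) raises a KeyError in current pandas, while reindex returns NaN instead:
print(df[['a', 'b']])               # select several labels at once
print(df.reindex(['a', 'b', 'd']))  # the missing label 'd' becomes NaN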

1.3 Series missing-data handling

Method      Description
dropna()    Filters out entries whose value is NaN
fillna()    Fills in missing data
isnull()    Returns a boolean array; missing values map to True
notnull()   Returns a boolean array; missing values map to False
df = pd.Series([1, 2, 3, 4, np.nan], index=['a', 'b', 'c', 'd', 'e'])
print(df)
a    1.0
b    2.0
c    3.0
d    4.0
e    NaN
dtype: float64
print(df.dropna())
a    1.0
b    2.0
c    3.0
d    4.0
dtype: float64
print(df.fillna(5))
a    1.0
b    2.0
c    3.0
d    4.0
e    5.0
dtype: float64
print(df.isnull())
a    False
b    False
c    False
d    False
e     True
dtype: bool
print(df.notnull())
a     True
b     True
c     True
d     True
e    False
dtype: bool
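Besides a constant, missing values can be filled from neighbouring entries; a small sketch using forward fill on the same Series:
print(df.ffill())  # propagate the last valid value forward: e becomes 4.0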

2. DataFrame data structure

A DataFrame is a tabular data structure containing an ordered collection of columns.

A DataFrame can be seen as a dictionary of Series that share a common index, as the sketch below shows.
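
A minimal sketch of that view, building a DataFrame from a dict of Series that align on a shared index (the names are illustrative):

s1 = pd.Series([1, 2, 3], index=['a', 'b', 'c'])
s2 = pd.Series([4, 5, 6], index=['a', 'b', 'c'])
print(pd.DataFrame({'col1': s1, 'col2': s2}))
   col1  col2
a     1     4
b     2     5
c     3     6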

2.1 Generating an array of timestamps: date_range

date_range parameters:

Parameter   Description
start       Start time
end         End time
periods     Number of periods
freq        Frequency; defaults to 'D' (day). Options include H (hour), W (week), B (business day), SM (semi-month), M (month end), T or min (minute), S (second), A (year), ...
dates = pd.date_range('20190101', periods=6, freq='M')
print(dates)
DatetimeIndex(['2019-01-31', '2019-02-28', '2019-03-31', '2019-04-30',
               '2019-05-31', '2019-06-30'],
              dtype='datetime64[ns]', freq='M')
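date_range can also take an explicit start and end instead of periods; a quick sketch:
pd.date_range(start='20190101', end='20190105', freq='D')
DatetimeIndex(['2019-01-01', '2019-01-02', '2019-01-03', '2019-01-04',
               '2019-01-05'],
              dtype='datetime64[ns]', freq='D')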
np.random.seed(1)
arr = 10 * np.random.randn(6, 4)
print(arr)
[[ 16.24345364  -6.11756414  -5.28171752 -10.72968622]
 [  8.65407629 -23.01538697  17.44811764  -7.61206901]
 [  3.19039096  -2.49370375  14.62107937 -20.60140709]
 [ -3.22417204  -3.84054355  11.33769442 -10.99891267]
 [ -1.72428208  -8.77858418   0.42213747   5.82815214]
 [-11.00619177  11.4472371    9.01590721   5.02494339]]
df = pd.DataFrame(arr, index=dates, columns=['c1', 'c2', 'c3', 'c4'])
df

c1 c2 c3 c4
2019-01-31 16.243454 -6.117564 -5.281718 -10.729686
2019-02-28 8.654076 -23.015387 17.448118 -7.612069
2019-03-31 3.190391 -2.493704 14.621079 -20.601407
2019-04-30 -3.224172 -3.840544 11.337694 -10.998913
2019-05-31 -1.724282 -8.778584 0.422137 5.828152
2019-06-30 -11.006192 11.447237 9.015907 5.024943

3. DataFrame attributes

Attribute     Description
dtypes        View the data type of each column
index         View the row index
columns       View the column labels
values        View the underlying data, i.e. the values without index or header
describe      View summary statistics per column (count, mean, quartiles, min/max); numeric data only
transpose     Transpose; also available as T
sort_index    Sort by row or column index
sort_values   Sort by data values
#  View the data type of each column
print(df.dtypes)
c1    float64
c2    float64
c3    float64
c4    float64
dtype: object
df

c1 c2 c3 c4
2019-01-31 16.243454 -6.117564 -5.281718 -10.729686
2019-02-28 8.654076 -23.015387 17.448118 -7.612069
2019-03-31 3.190391 -2.493704 14.621079 -20.601407
2019-04-30 -3.224172 -3.840544 11.337694 -10.998913
2019-05-31 -1.724282 -8.778584 0.422137 5.828152
2019-06-30 -11.006192 11.447237 9.015907 5.024943
print(df.index)
DatetimeIndex(['2019-01-31', '2019-02-28', '2019-03-31', '2019-04-30',
               '2019-05-31', '2019-06-30'],
              dtype='datetime64[ns]', freq='M')
print(df.columns)
Index(['c1', 'c2', 'c3', 'c4'], dtype='object')
print(df.values)
[[ 16.24345364  -6.11756414  -5.28171752 -10.72968622]
 [  8.65407629 -23.01538697  17.44811764  -7.61206901]
 [  3.19039096  -2.49370375  14.62107937 -20.60140709]
 [ -3.22417204  -3.84054355  11.33769442 -10.99891267]
 [ -1.72428208  -8.77858418   0.42213747   5.82815214]
 [-11.00619177  11.4472371    9.01590721   5.02494339]]
df.describe()

c1 c2 c3 c4
count 6.000000 6.000000 6.000000 6.000000
mean 2.022213 -5.466424 7.927203 -6.514830
std 9.580084 11.107772 8.707171 10.227641
min -11.006192 -23.015387 -5.281718 -20.601407
25% -2.849200 -8.113329 2.570580 -10.931606
50% 0.733054 -4.979054 10.176801 -9.170878
75% 7.288155 -2.830414 13.800233 1.865690
max 16.243454 11.447237 17.448118 5.828152
df.T

2019-01-31 00:00:00 2019-02-28 00:00:00 2019-03-31 00:00:00 2019-04-30 00:00:00 2019-05-31 00:00:00 2019-06-30 00:00:00
c1 16.243454 8.654076 3.190391 -3.224172 -1.724282 -11.006192
c2 -6.117564 -23.015387 -2.493704 -3.840544 -8.778584 11.447237
c3 -5.281718 17.448118 14.621079 11.337694 0.422137 9.015907
c4 -10.729686 -7.612069 -20.601407 -10.998913 5.828152 5.024943
#  Sort by row index (the dates), ascending by default
df.sort_index(axis=0)

c1 c2 c3 c4
2019-01-31 16.243454 -6.117564 -5.281718 -10.729686
2019-02-28 8.654076 -23.015387 17.448118 -7.612069
2019-03-31 3.190391 -2.493704 14.621079 -20.601407
2019-04-30 -3.224172 -3.840544 11.337694 -10.998913
2019-05-31 -1.724282 -8.778584 0.422137 5.828152
2019-06-30 -11.006192 11.447237 9.015907 5.024943
#  Sort by column labels [c1, c2, c3, c4], ascending by default
df.sort_index(axis=1)

c1 c2 c3 c4
2019-01-31 16.243454 -6.117564 -5.281718 -10.729686
2019-02-28 8.654076 -23.015387 17.448118 -7.612069
2019-03-31 3.190391 -2.493704 14.621079 -20.601407
2019-04-30 -3.224172 -3.840544 11.337694 -10.998913
2019-05-31 -1.724282 -8.778584 0.422137 5.828152
2019-06-30 -11.006192 11.447237 9.015907 5.024943
#  Sort rows by the values of column c2, ascending by default
df.sort_values(by='c2')

c1 c2 c3 c4
2019-02-28 8.654076 -23.015387 17.448118 -7.612069
2019-05-31 -1.724282 -8.778584 0.422137 5.828152
2019-01-31 16.243454 -6.117564 -5.281718 -10.729686
2019-04-30 -3.224172 -3.840544 11.337694 -10.998913
2019-03-31 3.190391 -2.493704 14.621079 -20.601407
2019-06-30 -11.006192 11.447237 9.015907 5.024943
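To sort in descending order instead, pass ascending=False:
df.sort_values(by='c2', ascending=False)  # largest c2 value first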

4. Selecting values from a DataFrame

df

c1 c2 c3 c4
2019-01-31 16.243454 -6.117564 -5.281718 -10.729686
2019-02-28 8.654076 -23.015387 17.448118 -7.612069
2019-03-31 3.190391 -2.493704 14.621079 -20.601407
2019-04-30 -3.224172 -3.840544 11.337694 -10.998913
2019-05-31 -1.724282 -8.778584 0.422137 5.828152
2019-06-30 -11.006192 11.447237 9.015907 5.024943

4.1 Selecting by column

df['c2']
2019-01-31    -6.117564
2019-02-28   -23.015387
2019-03-31    -2.493704
2019-04-30    -3.840544
2019-05-31    -8.778584
2019-06-30    11.447237
Freq: M, Name: c2, dtype: float64
df[['c2', 'c3']]

c2 c3
2019-01-31 -6.117564 -5.281718
2019-02-28 -23.015387 17.448118
2019-03-31 -2.493704 14.621079
2019-04-30 -3.840544 11.337694
2019-05-31 -8.778584 0.422137
2019-06-30 11.447237 9.015907

4.2 loc (selecting by row label)

#  Select data by custom row labels (this slice matches the first three rows)
df.loc['2019-01-31':'2019-03-31']
#  The equivalent positional slice
df[0:3]

c1 c2 c3 c4
2019-01-31 16.243454 -6.117564 -5.281718 -10.729686
2019-02-28 8.654076 -23.015387 17.448118 -7.612069
2019-03-31 3.190391 -2.493704 14.621079 -20.601407
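loc also accepts a column selection after the row labels; a small sketch on the same df:
df.loc['2019-01-31':'2019-03-31', ['c1', 'c2']]  # rows by label, restricted to two columns
df.loc['2019-01-31', 'c1']                       # a single cell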

4.3 iloc (similar to NumPy array indexing)

df.values
array([[ 16.24345364,  -6.11756414,  -5.28171752, -10.72968622],
       [  8.65407629, -23.01538697,  17.44811764,  -7.61206901],
       [  3.19039096,  -2.49370375,  14.62107937, -20.60140709],
       [ -3.22417204,  -3.84054355,  11.33769442, -10.99891267],
       [ -1.72428208,  -8.77858418,   0.42213747,   5.82815214],
       [-11.00619177,  11.4472371 ,   9.01590721,   5.02494339]])
#  Select data by integer position
print(df.iloc[2, 1])
-2.493703754774101
df.iloc[1:4, 1:4]

c2 c3 c4
2019-02-28 -23.015387 17.448118 -7.612069
2019-03-31 -2.493704 14.621079 -20.601407
2019-04-30 -3.840544 11.337694 -10.998913

4.4 Selecting with boolean conditions

df[df['c1'] > 0]

c1 c2 c3 c4
2019-01-31 16.243454 -6.117564 -5.281718 -10.729686
2019-02-28 8.654076 -23.015387 17.448118 -7.612069
2019-03-31 3.190391 -2.493704 14.621079 -20.601407
df[(df['c1'] > 0) & (df['c2'] > -8)]

c1 c2 c3 c4
2019-01-31 16.243454 -6.117564 -5.281718 -10.729686
2019-03-31 3.190391 -2.493704 14.621079 -20.601407

5. Replacing values in a DataFrame

df

c1 c2 c3 c4
2019-01-31 16.243454 -6.117564 -5.281718 -10.729686
2019-02-28 8.654076 -23.015387 17.448118 -7.612069
2019-03-31 3.190391 -2.493704 14.621079 -20.601407
2019-04-30 -3.224172 -3.840544 11.337694 -10.998913
2019-05-31 -1.724282 -8.778584 0.422137 5.828152
2019-06-30 -11.006192 11.447237 9.015907 5.024943
df.iloc[0:3, 0:2] = 0
df

c1 c2 c3 c4
2019-01-31 0.000000 0.000000 -5.281718 -10.729686
2019-02-28 0.000000 0.000000 17.448118 -7.612069
2019-03-31 0.000000 0.000000 14.621079 -20.601407
2019-04-30 -3.224172 -3.840544 11.337694 -10.998913
2019-05-31 -1.724282 -8.778584 0.422137 5.828152
2019-06-30 -11.006192 11.447237 9.015907 5.024943
df['c3'] > 10
2019-01-31    False
2019-02-28     True
2019-03-31     True
2019-04-30     True
2019-05-31    False
2019-06-30    False
Freq: M, Name: c3, dtype: bool
#  Replace entire rows where c3 > 10
df[df['c3'] > 10] = 100
df

c1 c2 c3 c4
2019-01-31 0.000000 0.000000 -5.281718 -10.729686
2019-02-28 100.000000 100.000000 100.000000 100.000000
2019-03-31 100.000000 100.000000 100.000000 100.000000
2019-04-30 100.000000 100.000000 100.000000 100.000000
2019-05-31 -1.724282 -8.778584 0.422137 5.828152
2019-06-30 -11.006192 11.447237 9.015907 5.024943
#  Replace entire rows where c3 equals 100
df = df.astype(np.int32)
df[df['c3'].isin([100])] = 1000
df

c1 c2 c3 c4
2019-01-31 0 0 -5 -10
2019-02-28 1000 1000 1000 1000
2019-03-31 1000 1000 1000 1000
2019-04-30 1000 1000 1000 1000
2019-05-31 -1 -8 0 5
2019-06-30 -11 11 9 5
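For value substitution without a boolean mask, pandas also offers the replace method; a quick sketch on the df above:
df.replace(1000, 0)  # every 1000 in the frame becomes 0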

6. Reading a CSV file

import pandas as pd
from io import StringIO
test_data = '''
5.1,,1.4,0.2
4.9,3.0,1.4,0.2
4.7,3.2,,0.2
7.0,3.2,4.7,1.4
6.4,3.2,4.5,1.5
6.9,3.1,4.9,
,,,
'''
test_data = StringIO(test_data)
df = pd.read_csv(test_data, header=None)
df.columns = ['c1', 'c2', 'c3', 'c4']
df

c1 c2 c3 c4
0 5.1 NaN 1.4 0.2
1 4.9 3.0 1.4 0.2
2 4.7 3.2 NaN 0.2
3 7.0 3.2 4.7 1.4
4 6.4 3.2 4.5 1.5
5 6.9 3.1 4.9 NaN
6 NaN NaN NaN NaN

7. Handling missing data

df.isnull()

c1 c2 c3 c4
0 False True False False
1 False False False False
2 False False True False
3 False False False False
4 False False False False
5 False False False True
6 True True True True
#  Chaining sum() after isnull() gives the number of missing values in each column
print(df.isnull().sum())
c1    1
c2    2
c3    2
c4    2
dtype: int64
# axis=0: drop rows containing NaN values
df.dropna(axis=0)

c1 c2 c3 c4
1 4.9 3.0 1.4 0.2
3 7.0 3.2 4.7 1.4
4 6.4 3.2 4.5 1.5
# axis=1: drop columns containing NaN values
# (every column has at least one NaN, so only the empty index remains)
df.dropna(axis=1)

0
1
2
3
4
5
6
#  Drop only rows whose values are all NaN
df.dropna(how='all')

c1 c2 c3 c4
0 5.1 NaN 1.4 0.2
1 4.9 3.0 1.4 0.2
2 4.7 3.2 NaN 0.2
3 7.0 3.2 4.7 1.4
4 6.4 3.2 4.5 1.5
5 6.9 3.1 4.9 NaN
#  Drop rows with fewer than 4 non-NaN values
df.dropna(thresh=4)

c1 c2 c3 c4
1 4.9 3.0 1.4 0.2
3 7.0 3.2 4.7 1.4
4 6.4 3.2 4.5 1.5
#  Drop rows where column c2 is NaN
df.dropna(subset=['c2'])

c1 c2 c3 c4
1 4.9 3.0 1.4 0.2
2 4.7 3.2 NaN 0.2
3 7.0 3.2 4.7 1.4
4 6.4 3.2 4.5 1.5
5 6.9 3.1 4.9 NaN
#  Fill NaN values with a constant
df.fillna(value=10)

c1 c2 c3 c4
0 5.1 10.0 1.4 0.2
1 4.9 3.0 1.4 0.2
2 4.7 3.2 10.0 0.2
3 7.0 3.2 4.7 1.4
4 6.4 3.2 4.5 1.5
5 6.9 3.1 4.9 10.0
6 10.0 10.0 10.0 10.0

8. Merging data

df1 = pd.DataFrame(np.zeros((3, 4)))
df1

0 1 2 3
0 0.0 0.0 0.0 0.0
1 0.0 0.0 0.0 0.0
2 0.0 0.0 0.0 0.0
df2 = pd.DataFrame(np.ones((3, 4)))
df2

0 1 2 3
0 1.0 1.0 1.0 1.0
1 1.0 1.0 1.0 1.0
2 1.0 1.0 1.0 1.0
# axis=0: concatenate vertically (stack rows)
pd.concat((df1, df2), axis=0)

0 1 2 3
0 0.0 0.0 0.0 0.0
1 0.0 0.0 0.0 0.0
2 0.0 0.0 0.0 0.0
0 1.0 1.0 1.0 1.0
1 1.0 1.0 1.0 1.0
2 1.0 1.0 1.0 1.0
# axis=1: concatenate horizontally (side by side)
pd.concat((df1, df2), axis=1)

0 1 2 3 0 1 2 3
0 0.0 0.0 0.0 0.0 1.0 1.0 1.0 1.0
1 0.0 0.0 0.0 0.0 1.0 1.0 1.0 1.0
2 0.0 0.0 0.0 0.0 1.0 1.0 1.0 1.0
# append stacks rows only (removed in pandas 2.0; use pd.concat instead)
df1.append(df2)

0 1 2 3
0 0.0 0.0 0.0 0.0
1 0.0 0.0 0.0 0.0
2 0.0 0.0 0.0 0.0
0 1.0 1.0 1.0 1.0
1 1.0 1.0 1.0 1.0
2 1.0 1.0 1.0 1.0
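concat and append stitch frames along an axis. For database-style joins on a key column there is pd.merge; a minimal sketch with made-up frames:
left = pd.DataFrame({'key': ['k0', 'k1'], 'x': [1, 2]})
right = pd.DataFrame({'key': ['k0', 'k1'], 'y': [3, 4]})
pd.merge(left, right, on='key')  # inner join on the shared 'key' column

  key  x  y
0  k0  1  3
1  k1  2  4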

9. Importing and exporting data

Use df = pd.read_excel(filename) to read a file and df.to_excel(filename) to save one.

9.1 Reading files to import data

Main parameters of the read functions:

Parameter     Description
sep           Separator; accepts regular expressions such as '\s+'
header=None   Indicates the file has no header row
names         Specifies the column names
index_col     Specifies a column to use as the index
skiprows      Specifies rows to skip
na_values     Specifies strings that represent missing values
parse_dates   Specifies columns to parse as dates; boolean or list
df = pd.read_excel(filename)
df = pd.read_csv(filename)
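A sketch exercising several of these parameters on an in-memory file (the data and names are made up for illustration):
from io import StringIO

raw = 'id;value;note\n1;3.5;ok\n2;missing;fine\n'
df = pd.read_csv(StringIO(raw),
                 sep=';',                # custom separator
                 header=0,               # first line holds the column names
                 index_col='id',         # use the id column as the index
                 na_values=['missing'])  # treat the string 'missing' as NaN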

9.2 Writing files to export data

Main parameters of the write functions:

Parameter      Description
sep            Separator
na_rep         String used for missing values; defaults to the empty string
header=False   Do not write column names
index=False    Do not write the row index
columns        Specifies the columns to output; takes a list
df.to_excel(filename)
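A matching write sketch (the filename is arbitrary):
df.to_csv('out.csv',
          sep=';',            # separator
          na_rep='NULL',      # string written for missing values
          index=False,        # do not write the row index
          columns=['value'])  # only output the selected columns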

10. Reading JSON files with pandas

strtext = '[{"ttery":"min","issue":"20130801-3391","code":"8,4,5,2,9","code1":"297734529","code2":null,"time":1013395466000},\
{"ttery":"min","issue":"20130801-3390","code":"7,8,2,1,2","code1":"298058212","code2":null,"time":1013395406000},\
{"ttery":"min","issue":"20130801-3389","code":"5,9,1,2,9","code1":"298329129","code2":null,"time":1013395346000},\
{"ttery":"min","issue":"20130801-3388","code":"3,8,7,3,3","code1":"298588733","code2":null,"time":1013395286000},\
{"ttery":"min","issue":"20130801-3387","code":"0,8,5,2,7","code1":"298818527","code2":null,"time":1013395226000}]'
df = pd.read_json(strtext, orient='records')
df

code code1 code2 issue time ttery
0 8,4,5,2,9 297734529 NaN 20130801-3391 1013395466000 min
1 7,8,2,1,2 298058212 NaN 20130801-3390 1013395406000 min
2 5,9,1,2,9 298329129 NaN 20130801-3389 1013395346000 min
3 3,8,7,3,3 298588733 NaN 20130801-3388 1013395286000 min
4 0,8,5,2,7 298818527 NaN 20130801-3387 1013395226000 min
df.to_excel('pandas_json.xlsx',
            index=False,
            columns=["ttery", "issue", "code", "code1", "code2", "time"])

10.1 The five forms of the orient parameter

orient indicates the expected format of the JSON string. It takes five values:

1.'split' : dict like {index -> [index], columns -> [columns], data -> [values]}

A JSON format with an index, column fields, and a data matrix. The only allowed keys are index, columns, and data.

s = '{"index":[1,2,3],"columns":["a","b"],"data":[[1,3],[2,8],[3,9]]}'
df = pd.read_json(s, orient='split')
df

a b
1 1 3
2 2 8
3 3 9

2.'records' : list like [{column -> value}, ... , {column -> value}]

A list whose members are dictionaries, as in the JSON example handled above. Column names are the keys, their values are the entries, and each dictionary becomes one row of the DataFrame.

strtext = '[{"ttery":"min","issue":"20130801-3391","code":"8,4,5,2,9","code1":"297734529","code2":null,"time":1013395466000},\
{"ttery":"min","issue":"20130801-3390","code":"7,8,2,1,2","code1":"298058212","code2":null,"time":1013395406000}]'
df = pd.read_json(strtext, orient='records')
df

code code1 code2 issue time ttery
0 8,4,5,2,9 297734529 NaN 20130801-3391 1013395466000 min
1 7,8,2,1,2 298058212 NaN 20130801-3390 1013395406000 min

3.'index' : dict like {index -> {column -> value}}

The outer keys are the index; each value is a dictionary mapping column names to values. For example:

s = '{"0":{"a":1,"b":2},"1":{"a":9,"b":11}}'
df = pd.read_json(s, orient='index')
df

a b
0 1 2
1 9 11

4.'columns' : dict like {column -> {index -> value}}

Here the outer keys are the columns; each value is a dictionary keyed by index, with the cell contents as its values. For example:

s = '{"a":{"0":1,"1":9},"b":{"0":2,"1":11}}'
df = pd.read_json(s, orient='columns')
df

a b
0 1 2
1 9 11

5.'values' : just the values array.

The most familiar form: a nested list two levels deep, where each inner list is one row.

s = '[["a",1],["b",2]]'
df = pd.read_json(s, orient='values')
df

0 1
0 a 1
1 b 2
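The same orient values apply in the other direction: df.to_json(orient=...) writes the matching layout. A quick roundtrip sketch:
df = pd.DataFrame({'a': [1, 9], 'b': [2, 11]})
print(df.to_json(orient='split'))    # e.g. {"columns":["a","b"],"index":[0,1],"data":[[1,2],[9,11]]}
print(df.to_json(orient='records'))  # [{"a":1,"b":2},{"a":9,"b":11}]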

11. Reading SQL with pandas

import pandas as pd
import pymysql

def query(sql):
    #  Connect to the MySQL database
    connection = pymysql.connect(
        host="localhost",
        port=3306,
        user="root",
        passwd="123",
        db="db1",
    )
    try:
        return pd.read_sql(sql, con=connection)
    except Exception:
        print("SQL is not correct!")
    finally:
        connection.close()

sql = "select * from test1 limit 0, 10"  # SQL statement
data = query(sql)
if data is not None:
    print(data.columns.tolist())  # view the columns
    print(data)                   # view the data
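Note that newer pandas versions warn when read_sql is given a raw DBAPI connection and recommend a SQLAlchemy engine instead; a hedged alternative sketch (the connection string values are the same placeholders as above):
from sqlalchemy import create_engine

# Assumes sqlalchemy and pymysql are installed; credentials are placeholders
engine = create_engine('mysql+pymysql://root:123@localhost:3306/db1')
data = pd.read_sql("select * from test1 limit 0, 10", con=engine)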