Lockdowns a Complete Failure Compared to Controls (Countries That Did Not)? Python Analysis, Part 2

Part 2, as promised. We compare cases and deaths per million in industrialized countries that did little to nothing (no lockdowns) against Great Britain and the United States. The data is taken from: https://ourworldindata.org/coronaviru…
#covid19 #lockdown #socialdistancing

(Audio volume is a bit choppy around the midpoint.)
Additional code, continuing from Part 1:
# Subset each country of interest by ISO code, then combine into a single DataFrame.
datasw = data.loc[data.iso_code=='SWE', :]
datagb = data.loc[data.iso_code=='GBR', :]
dataus = data.loc[data.iso_code=='USA', :]
datasg = data.loc[data.iso_code=='SGP', :]
datajp = data.loc[data.iso_code=='JPN', :]
datako = data.loc[data.iso_code=='KOR', :]
datatw = data.loc[data.iso_code=='TWN', :]
dataall = [datagb, dataus, datasw, datasg, datajp, datako, datatw]
dataall = pd.concat(dataall)
dataall
# Convert the date column to datetime before using it as the index.
dataall['date'] = pd.to_datetime(dataall['date'])
dataall.set_index('date', inplace=True)
fig, ax = plt.subplots(figsize=(50,25))
dataall.groupby('iso_code')['new_cases_smoothed_per_million'].plot(legend=True, fontsize=20, linewidth=7.0)
# Labels follow the alphabetical iso_code group order: GBR, JPN, KOR, SGP, SWE, TWN, USA.
ax.legend(['Great Britain = Lockdown', 'Japan = No LD', 'South Korea = No LD', 'Singapore = LD June (migrant LD, highest pop. density)', 'Sweden = No LD', 'Taiwan = No LD', 'USA = Lockdown'], prop=dict(size=50))
comp = dataall.loc['2020-09-18']
comp.set_index("iso_code", inplace=True)
comp = pd.DataFrame(comp[['total_cases_per_million', 'total_deaths_per_million']])
plt.rc('legend', fontsize=50)
comp.plot.bar(rot=0, figsize=(20,20),fontsize=30)
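
The description above mentions both cases and deaths per million, but the time-series plot only covers smoothed new cases. Here is a minimal sketch of the matching deaths plot, assuming the same dataall frame built above and the OWID column new_deaths_smoothed_per_million (the same column is used later in this post):

# Sketch: same grouped time-series plot, but for smoothed new deaths per million.
fig, ax = plt.subplots(figsize=(50, 25))
dataall.groupby('iso_code')['new_deaths_smoothed_per_million'].plot(legend=True, fontsize=20, linewidth=7.0)
ax.legend(['Great Britain = Lockdown', 'Japan = No LD', 'South Korea = No LD',
           'Singapore = LD June (migrant LD, highest pop. density)', 'Sweden = No LD',
           'Taiwan = No LD', 'USA = Lockdown'], prop=dict(size=50))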

Pandemic Over? COVID-19 World Data: Amateur Python Analysis

From an educational perspective, we review current COVID-19 data and find that lockdowns and population density currently appear to have no numerical effect on COVID-19. In any case, this is more about exploring the code from a beginner's standpoint with Python and DataFrames.
#covid19 #pandemicover #coviddata
CSV files found here:
https://ourworldindata.org/coronaviru…
Code: (Had to remove the angle brackets)
import numpy as np
import pandas as pd
from scipy import stats
from scipy.stats import spearmanr
from scipy.stats import kendalltau
from scipy.stats import pearsonr
import statsmodels.api as sm
import matplotlib.pyplot as plt
import seaborn as sns
import warnings
warnings.filterwarnings("ignore")
# Pandemic Claim Currently Invalid -- Ralph Turchiano
data = pd.read_csv('owid-covid-data-19SEP2020.csv')
data.info()
pd.set_option('display.max_columns', None)
data.tail(5)
data['date'] = pd.to_datetime(data['date'])
data.info()
data_18SEP = data[data['date']=='2020-09-18']
# Industrialized countries: Human Development Index of at least 0.8.
data_ind = data_18SEP[data_18SEP['human_development_index']>=0.8]
data_ind.head(10)
data_ind.drop(['iso_code', 'continent', 'handwashing_facilities', 'stringency_index'], axis=1, inplace=True)
data_ind.columns
data_ind['extreme_poverty'].fillna(0, inplace=True)
# Pull two rows by index for a side-by-side comparison (Sweden and the USA in this snapshot of the CSV).
data_compare = pd.DataFrame([data.loc[37991], data.loc[41736]])
data_compare
data_compare.set_index('location', inplace=True)
data_compare['total_cases_per_million']
data_Swe_USA = pd.DataFrame(data_compare[['total_cases_per_million', 'new_cases_per_million', 'new_deaths_per_million']])
data_Swe_USA
data_ind.drop(['date', 'new_cases', 'new_deaths', 'total_tests', 'total_tests_per_thousand',
               'new_tests_per_thousand', 'new_tests_smoothed', 'new_tests',
               'new_tests_smoothed_per_thousand', 'tests_per_case', 'tests_units',
               'new_deaths_per_million', 'positive_rate'], axis=1, inplace=True)
data_ind.tail()
data_ind.dropna(inplace=True)
data_ind.corr("kendall")
data_18SEP.tail()
data_18SEP.loc[44310]
data_18SEP.loc[44310, ['new_cases_smoothed_per_million', 'new_deaths_smoothed_per_million']]
New = pd.DataFrame(data[['new_cases_smoothed_per_million', 'new_deaths_smoothed_per_million']])
New.corr('kendall')
dataw = data.loc[data['iso_code'] == 'OWID_WRL']
dataw
# Convert the date column to datetime before using it as the index.
dataw['date'] = pd.to_datetime(dataw['date'])
dataw.set_index('date', inplace=True)
data_cl = pd.DataFrame(dataw[['new_deaths_smoothed', 'new_cases_smoothed']])
data_cl.dropna(inplace=True)
data_cl.plot(figsize=(30,12))
data_cl.tail(20)
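
To put numbers on the claim in the description that lockdowns and population density show no effect, here is a minimal sketch of the relevant pairwise Kendall correlations on the single-date snapshot. It assumes the OWID columns stringency_index, population_density, total_cases_per_million, and total_deaths_per_million, and it reuses data_18SEP from above (stringency_index was dropped from data_ind earlier, so the snapshot frame is used instead):

# Sketch: Kendall correlations of lockdown stringency and population density
# against per-million outcomes on the 2020-09-18 snapshot.
policy_cols = ['stringency_index', 'population_density',
               'total_cases_per_million', 'total_deaths_per_million']
policy = data_18SEP[policy_cols].dropna()
policy.corr('kendall')[['total_cases_per_million', 'total_deaths_per_million']]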

COVID-19 Tracking Data API and Data Anomalies (No Correlation? Case Increases vs. Hospitalization Increases)

Is there a correlation between positive cases and hospitalizations? Below is the API for Python access, open to anyone who wants to filter the data. I just want to give easy access to all the beginner student data scientists out there, such as myself. Explore and discover. **My apologies: it says High Def, but it does not look high def in the video here.**

Code:
import matplotlib.pyplot as plt
import pandas as pd
from scipy import stats
import statsmodels.api as sm
import requests
import time
from IPython.display import clear_output

# Download the Covid Tracking Project daily US data and save it locally.
response = requests.get("https://covidtracking.com/api/v1/us/daily.csv")
covid = response.content
ccc = open("daily.csv", "wb")
ccc.write(covid)
ccc.close()

df = pd.read_csv("daily.csv", index_col='date')
df.head()

# Columns of interest.
data = df[['positiveIncrease', 'hospitalizedIncrease']]
dataT = df[['positiveIncrease', 'hospitalizedIncrease', 'hospitalizedCurrently']]
dataD = df[['hospitalizedIncrease', 'deathIncrease']]
dataT.head(20)

# Scatter plot of daily case increases against daily hospitalization increases.
plt.figure(figsize=(20, 10))
Y = data['positiveIncrease']
X = data['hospitalizedIncrease']
plt.scatter(X, Y)
plt.ylabel("Tested Positive Increase")
plt.xlabel("Hospitalization Increase")
plt.show()

# OLS fit of hospitalization increases on case increases (plus a constant).
Y1 = sm.add_constant(Y)
reg = sm.OLS(X, Y1).fit()
reg.summary()

# Line plot of both series over time.
data.plot(y=['hospitalizedIncrease', 'positiveIncrease'], xticks=data.index[0:len(data):30], rot=90, figsize=(20, 10))

# Plot progressively longer tails of the data in separate figures.
for x in range(len(data)):
    plt.figure(figsize=(20, 10))
    plt.xticks(data.index.values[0:len(data):30], rotation=90, fontsize=20)
    plt.plot(data.tail(x))
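
The scatter plot and OLS summary above only hint at the correlation question, so here is a minimal sketch that computes the correlation coefficients directly. It assumes the df loaded above with the positiveIncrease and hospitalizedIncrease columns; the 14-day shift is only an illustrative lag, not a fitted reporting delay:

from scipy.stats import pearsonr, spearmanr

# Sketch: linear and rank correlation between daily case and hospitalization increases.
pairs = df[['positiveIncrease', 'hospitalizedIncrease']].dropna()
print(pearsonr(pairs['positiveIncrease'], pairs['hospitalizedIncrease']))
print(spearmanr(pairs['positiveIncrease'], pairs['hospitalizedIncrease']))

# Same check with case increases shifted back 14 days (an assumed lag, for illustration only),
# after sorting chronologically since the CSV is newest-first.
lagged = pairs.sort_index()
lagged['positive_lag14'] = lagged['positiveIncrease'].shift(14)
lagged = lagged.dropna()
print(spearmanr(lagged['positive_lag14'], lagged['hospitalizedIncrease']))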