Web scraping with BeautifulSoup is slow
I have written a web scraping script in Python that takes data from Hattrick.org match pages and collects it into a table so it can be mined, for example to estimate the likelihood of goals.
The difficulty I have is that it is really slow: it returns about 20,000 rows in roughly 5 hours.
Is there a way to improve the scraping technique so it does not take that long?
This is the Python code:
import requests
from bs4 import BeautifulSoup
import re
import pandas as pd
import numpy as np

ini = 631163587
q = 200000  # Change to q = 10 to try a sample

Cols = ['01. Local MF',
        '02. Away MF',
        '03. Local RD',
        '04. Away RD',
        '05. Local CD',
        '06. Away CD',
        '07. Local LD',
        '08. Away LD',
        '09. Local RA',
        '10. Away RA',
        '11. Local CA',
        '12. Away CA',
        '13. Local LA',
        '14. Away LA',
        '15. Local IndD',
        '16. Away IndD',
        '17. Local IndA',
        '18. Away IndA',
        '19. Local Attitude',
        '20. Away Attitude',
        '21. Local Tactic',
        '22. Away Tactic',
        '23. Local Tactic Level',
        '24. Away Tactic Level',
        '25. Local Score',
        '26. Away Score']

df_ht = pd.DataFrame(data=np.nan, index=range(ini, ini + q), columns=Cols)
cont = []  # match IDs that failed to parse

for i in range(ini, ini + q):
    url2 = 'https://www74.hattrick.org/Club/Matches/Match.aspx?matchID=' + str(i)
    response = requests.get(url2)
    soup = BeautifulSoup(response.text, 'html.parser')
    s1 = soup.findAll('td')
    m = soup.findAll('meta')[10].attrs['content']
    d = re.findall('[ ,.,A-Z,a-z,0-9]* - [., ,A-Z,a-z,0-9]*', m)  # "home - away" parts of the meta description
    d2 = re.findall('[0-9]+', d[1])  # the two scores
    partido = d[0]  # match name ("home - away")
    try:
        D = {'01. Local MF': float(s1[3].contents[0]),
             '02. Away MF': float(s1[4].contents[0]),
             '03. Local RD': float(s1[10].contents[0]),
             '04. Away RD': float(s1[11].contents[0]),
             '05. Local CD': float(s1[17].contents[0]),
             '06. Away CD': float(s1[18].contents[0]),
             '07. Local LD': float(s1[24].contents[0]),
             '08. Away LD': float(s1[25].contents[0]),
             '09. Local RA': float(s1[31].contents[0]),
             '10. Away RA': float(s1[32].contents[0]),
             '11. Local CA': float(s1[38].contents[0]),
             '12. Away CA': float(s1[39].contents[0]),
             '13. Local LA': float(s1[45].contents[0]),
             '14. Away LA': float(s1[46].contents[0]),
             '15. Local IndD': float(s1[54].contents[0]),
             '16. Away IndD': float(s1[55].contents[0]),
             '17. Local IndA': float(s1[61].contents[0]),
             '18. Away IndA': float(s1[62].contents[0]),
             '19. Local Attitude': s1[67].contents[0],
             '20. Away Attitude': s1[68].contents[0],
             '21. Local Tactic': s1[70].contents[0],
             '22. Away Tactic': s1[71].contents[0],
             '23. Local Tactic Level': s1[75].contents[0],
             '24. Away Tactic Level': s1[76].contents[0],
             '25. Local Score': float(d2[0]),
             '26. Away Score': float(d2[1])}
        df_ht.loc[i, :] = D
    except:
        cont.append(i)  # remember the IDs that could not be parsed

df_ht.to_csv(r"Datos9.csv")
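
I suspect most of the time goes into the 200,000 sequential HTTP requests rather than into BeautifulSoup itself. For reference, this is a minimal, untested sketch of how the same loop might be parallelized with a thread pool and a shared requests.Session; the scrape_match helper is hypothetical and would contain the same td/meta parsing logic as the loop body above:

from concurrent.futures import ThreadPoolExecutor

session = requests.Session()  # reuses TCP connections instead of reconnecting per request

def scrape_match(i):
    # hypothetical helper: fetch one match page and return (id, row dict) or (id, None)
    url = 'https://www74.hattrick.org/Club/Matches/Match.aspx?matchID=' + str(i)
    response = session.get(url)
    soup = BeautifulSoup(response.text, 'html.parser')
    # ... same td/meta parsing as in the loop above would go here ...
    return i, None

with ThreadPoolExecutor(max_workers=16) as pool:
    for i, D in pool.map(scrape_match, range(ini, ini + q)):
        if D is not None:
            df_ht.loc[i, :] = D

Note that firing many parallel requests at Hattrick.org may run into rate limiting, so the worker count would need tuning.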
web-scraping
asked by Juan Esteban de la Calle