# [Lab](../) > [Data](../data/) > [Web Scraping](../web-scraping/) > [Web Scraping with Python: Extracting Text](web-scraping-extracting-text.md)
<br><br>
<p>This tutorial is a continuation of the <a href="https://github.com/NickGilmore/Python_Web_Scraping">Web Scraping with Python</a> tutorial.</p>

---
## Introduction
As we have seen in our previous tutorials, web scraping is a powerful tool for extracting data from websites. In this tutorial, we will learn how to extract text using the `BeautifulSoup` library and regular expressions. Two third-party libraries need to be installed (the `re` module for regular expressions ships with Python and needs no installation):
```
pip install beautifulsoup4
pip install requests
```
These libraries allow us to parse HTML pages, search for specific tags and attributes, and extract text. We will also learn how to handle URLs and HTTP requests using the `requests` library.
## The Basics of BeautifulSoup
BeautifulSoup is a Python library that allows us to parse HTML and XML documents. It provides an easy way to navigate the structure of these documents and extract text. In this tutorial, we will use version 4.7.1.
### Installing BeautifulSoup
To install BeautifulSoup, simply run the following command in your terminal or command prompt:
```
pip install beautifulsoup4
```
This will install the latest version of the library.
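To confirm the installation (and check which version you have, which may differ from the 4.7.1 used in this tutorial), you can print the library's version string:
```python
# Print the installed BeautifulSoup version
import bs4
print(bs4.__version__)
```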
### Creating a BeautifulSoup object
To create a BeautifulSoup object, we first need an HTML or XML document to parse. We pass the document to the `BeautifulSoup` constructor:
```python
from bs4 import BeautifulSoup
import requests
url = 'https://www.example.com'
page = requests.get(url)
content = page.content
soup = BeautifulSoup(content, 'html.parser')
```
In this example, we are making an HTTP request to a URL using the `requests` library, and then parsing the HTML content of the response with the `BeautifulSoup` constructor. The first argument is the markup to parse (here, the raw bytes of the response), and the second argument names the parser to use (here, Python's built-in `html.parser`). The resulting BeautifulSoup object represents the parsed document and gives us access to all of its elements.
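For experimenting without a network request, the constructor also accepts raw markup directly. Here is a minimal sketch with an inline HTML string:
```python
from bs4 import BeautifulSoup

# Parse a small HTML snippet directly from a string
html = '<html><body><h1>Hello</h1><p>First paragraph.</p></body></html>'
soup = BeautifulSoup(html, 'html.parser')
print(soup.h1.get_text())  # prints: Hello
```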
### Navigating the DOM tree
To navigate the DOM tree of an HTML document, we can use the `find()` and `find_all()` methods provided by BeautifulSoup. These methods search the tree for tags matching a name or attributes: `find_all()` returns a list of all matching elements, while `find()` returns only the first match (or `None` if there is none). Here are some examples:
```python
# Find all `<p>` tags in the document
paragraphs = soup.find_all('p')
# Find the first `<h1>` tag in the document
header = soup.find('h1')
# Get the text content of an element
text = header.get_text()
# Remove leading and trailing whitespace from the text
cleaned_text = text.strip()
```
In this example, we are using `find_all()` to find all `<p>` tags in the document and storing them in a list called `paragraphs`. We are also using `find()` to find the first `<h1>` tag in the document and extracting its text content with the `get_text()` method. Finally, we are using the `strip()` method to remove leading and trailing whitespace from the string.
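Building on the `paragraphs` list from the example above, we can combine these methods to print the cleaned text of every paragraph on the page:
```python
# Print the stripped text of each <p> tag found above
for p in paragraphs:
    print(p.get_text().strip())
```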
### Searching for specific tags and attributes
In addition to finding all elements of a certain type, we can also search for specific tags or attributes using BeautifulSoup's methods. For example, if we want to find all `<a>` tags with the `href` attribute set to a specific value, we can use the following code:
```python
# Find all links with href='https://www.example.com/contact'
links = soup.find_all('a', href='https://www.example.com/contact')
```
In this example, we are using `find_all()` to find all `<a>` tags with the `href` attribute set to `'https://www.example.com/contact'`. We can then iterate over these links and extract their text content or other attributes as needed.
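We can also match attribute values by pattern rather than by exact string. Passing a compiled regular expression as the attribute value tells `find_all()` to return tags whose attribute matches it (the `re` module is covered in more detail below; the `contact` substring and `intro` class name here are hypothetical examples):
```python
import re

# Match href values by pattern rather than by exact string:
# find all <a> tags whose href contains the substring 'contact'
contact_links = soup.find_all('a', href=re.compile('contact'))

# Filter by CSS class; class_ avoids clashing with Python's class keyword
# ('intro' is a hypothetical class name for illustration)
intro_paragraphs = soup.find_all('p', class_='intro')
```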
### Looping over elements
We can also loop over the elements of a BeautifulSoup object directly, since it is iterable:
```python
# Iterate over all elements in the document
for element in soup:
print(element)
```
In this example, iterating over the BeautifulSoup object yields its direct children at the top level of the document. We can then loop over these elements and extract their attributes or text content as needed.
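Note that this only visits the top-level children. To walk every tag in the tree, `find_all(True)`, which matches any tag, is more useful:
```python
# Visit every tag in the document and print its name
for tag in soup.find_all(True):
    print(tag.name)
```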
### Handling URLs and HTTP requests
BeautifulSoup does not fetch pages itself, so to handle URLs and make HTTP requests we combine it with the `requests` library. Here's an example:
```python
from bs4 import BeautifulSoup
import requests
# Define a function to extract links from a webpage
def extract_links(url):
    page = requests.get(url)
    soup = BeautifulSoup(page.content, 'html.parser')
    links = []
    for link in soup.find_all('a'):
        href = link.get('href')
        if href:
            links.append(href)
    return links
```
In this example, we are defining a function called `extract_links()` that takes a URL as an argument and returns a list of all the link targets on the page. We use `requests` to make an HTTP request to the URL and parse the HTML content with the `BeautifulSoup` constructor. We then iterate over all `<a>` tags in the document and extract each tag's `href` attribute value with the `get()` method. If the `href` attribute is present, we append its value to the list of links.
We can now use this function to extract links from any webpage:
```python
url = 'https://www.example.com'
links = extract_links(url)
print(links)
```
In this example, we are calling the `extract_links()` function with a URL and storing the resulting list of links in the `links` variable. We can then print out these links to verify that they are correct.
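One caveat: `href` values are often relative (for example `/contact` rather than a full URL), and `requests.get()` needs an absolute URL. Python's built-in `urllib.parse.urljoin()` resolves a relative link against the page it came from; a minimal sketch:
```python
from urllib.parse import urljoin

url = 'https://www.example.com'
# Resolve each (possibly relative) link against the page it was found on
absolute_links = [urljoin(url, link) for link in extract_links(url)]
print(absolute_links)
```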
## Searching for Text with Regular Expressions
In addition to using BeautifulSoup's methods to extract text from HTML documents, we can also use regular expressions to search for specific patterns of text. Regular expressions are a powerful tool for working with text data, and allow us to perform complex searches on strings and files. In this tutorial we will use Python's built-in `re` module to work with regular expressions.
### Using the re module
The `re` module is part of Python's standard library, so there is nothing to install (it cannot be installed with pip). We simply import it:
```python
import re
```
### Searching with re.search()
To search for text using regular expressions in Python, we can use the `re` module. Here are some examples:
```python
import re
# Define a function to search for text using a regular expression
def search_text(text, pattern):
    match = re.search(pattern, text)
    if match:
        return match.group()
    else:
        return None
```
In this example, we are defining a function called `search_text()` that takes two arguments: the text to search and the regular expression pattern to use. We are using the `re.search()` method to search for the pattern in the text, and if a match is found, we return the matched text using the `group()` method. If no match is found, we return `None`.
We can now use this function to search for specific patterns of text:
```python
text = 'This is an example text string containing the phrase "hello world".'
pattern = r'hello\s.*world'
match = search_text(text, pattern)
print(match)
```
In this example, we are searching for the pattern `r'hello\s.*world'` in a text string. In the pattern, `\s` matches a single whitespace character (it is not an escaped literal space), and `.*` matches any sequence of characters, zero or more times, which allows any amount of text to appear between "hello" and "world". If the pattern is found in the text string, the matched text is returned and we print it via the `match` variable.
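`re.search()` stops at the first match. To collect every non-overlapping match in a string, `re.findall()` returns them all as a list:
```python
import re

text = 'hello world, goodbye world, hello again world'
# findall() returns every non-overlapping match as a list of strings
matches = re.findall(r'hello\s\w+', text)
print(matches)  # ['hello world', 'hello again']
```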
## Extracting Text from Web Pages
Now that we have learned how to use BeautifulSoup to extract text from HTML documents and regular expressions to search for specific patterns, let's put these skills together: we will extract text from a web page and write it to a file.
### Extracting Text from a Web Page using BeautifulSoup
To extract text from web pages, we start from the `extract_links()` function that we defined earlier in this tutorial. We reproduce it here, together with the imports we will need for the rest of the pipeline (including `re`):
```python
from bs4 import BeautifulSoup
import requests
import re
# Define a function to extract links from a webpage
def extract_links(url):
    page = requests.get(url)
    soup = BeautifulSoup(page.content, 'html.parser')
    links = []
    for link in soup.find_all('a'):
        href = link.get('href')
        if href:
            links.append(href)
    return links
```
As before, this function makes an HTTP request to the URL with `requests` and parses the HTML content with BeautifulSoup. It then iterates over all `<a>` tags in the document, extracts each tag's `href` attribute value with the `get()` method, and appends any present value to the list of links.
### Writing Text to a File
To write text to a file, we can use Python's built-in `open()` function:
```python
# Define a function to write text to a file
def write_to_file(filename, text):
    with open(filename, 'w') as f:
        f.write(text)
```
In this example, we are defining a function called `write_to_file()` that takes two arguments: the filename to use and the text to write. We are using Python's built-in `open()` function to open the file with the specified filename in write mode ('w'). We are then writing the text to the file using the `write()` method.
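Because mode `'w'` truncates the file each time it is opened, repeated calls to `write_to_file()` keep only the last piece of text. If you want successive calls to accumulate output instead, a small variant using append mode (`'a'`) works:
```python
# Append text to a file instead of overwriting it
def append_to_file(filename, text):
    with open(filename, 'a') as f:
        f.write(text + '\n')
```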
### Putting it All Together
Now that we have defined these two functions, we can use them together to extract text from a web page and write it to a file:
```python
from urllib.parse import urljoin

# Define a function that extracts the links from a webpage, searches each
# linked page for a pattern, and writes all the matches to a file
def extract_text_from_webpage(url):
    links = extract_links(url)
    matches = []
    for link in links:
        # Relative hrefs (e.g. '/about') must be resolved to absolute URLs
        page = requests.get(urljoin(url, link))
        soup = BeautifulSoup(page.content, 'html.parser')
        text = soup.get_text()
        pattern = r'hello\s.*world'
        match = search_text(text, pattern)
        if match:
            matches.append(match)
    # Join the matches and write them once; write_to_file() opens the file
    # in 'w' mode, so calling it inside the loop would overwrite each match
    write_to_file('output.txt', '\n'.join(matches))
```
In this example, we are defining a function called `extract_text_from_webpage()` that takes a URL as an argument. We use the `extract_links()` function to extract all the links from the web page and then iterate over them, resolving each (possibly relative) link to an absolute URL with `urljoin()`, as discussed earlier. For each link, we make an HTTP request with `requests` and parse the HTML content with BeautifulSoup. We then get the page's text with the `get_text()` method and search it for the pattern `r'hello\s.*world'` using the `search_text()` function. Each match is collected into a list, and at the end all matches are written in a single call to `write_to_file()` (which opens the file in write mode and would otherwise overwrite earlier matches) to a file called "output.txt".
We can now use this function to extract text from a web page and write it to a file:
```python
url = 'https://www.example.com'
extract_text_from_webpage(url)
```
In this example, we are calling the `extract_text_from_webpage()` function with a URL as an argument. The function extracts all the links from the web page, resolves them to absolute URLs, and searches each linked page for the "hello world" pattern. Any matches found are written to a file called "output.txt".
## Conclusion
In this tutorial, we have learned how to use BeautifulSoup and regular expressions to extract text from HTML documents and web pages. We have also learned how to search for specific patterns of text using regular expressions. Finally, we have put these skills together to extract text from a web page and write it to a file.
By following the steps in this tutorial, you should be able to extract text from any web page and save it to a file. You can also modify the code to search for specific patterns of text or perform other operations on the extracted text.