Technical Development As A Region: Part 2

Read Part 1 of this series here. Part 3 is coming soon.

Where we last left off, we had been discussing how to get data from the NationStates API. Of course, the question then becomes: what do you do with this data? The response contains the information we want, but it is in an odd format – and what are these tags? They’re ugly!

Those tags are XML, short for Extensible Markup Language. XML is a data format that is easily readable to computers, but does not really attempt to be easily readable to humans. However, there is a simple solution. Python and its community, as usual, provide a library called ‘BeautifulSoup4’, which enables you to fetch information and look through it (a process known as ‘parsing’) in an easy and convenient manner.

Screenshot of an API call to check the number of nations in The Rejected Realms.

The documentation is somewhat lengthy but, overall, it is fairly easy to understand. The key points are that you need to install what is called a ‘parser library’ in order to use BeautifulSoup, and that there is a specific format for how to parse things.

To make things simple, we are going to use a fairly standard parser library, lxml. First, let’s install the libraries we’ll need. Run "python3 -m pip install beautifulsoup4 lxml" (the beautifulsoup4 package provides the bs4 module we import below) and make sure there are no errors in the output. Now you are ready to do some parsing!
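If you want to confirm the installation worked before moving on, a quick one-liner will do it. This is just a sanity check, not part of the script itself:

```shell
# Install the libraries (beautifulsoup4 provides the bs4 module):
python3 -m pip install beautifulsoup4 lxml

# Sanity check: if both imports succeed, this prints the BeautifulSoup version.
python3 -c "import bs4, lxml; print(bs4.__version__)"
```

If the second command prints a version number instead of an ImportError, you are good to go.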

Open up your preferred editor – I use PyCharm Professional, although there is also a Community Edition available, along with other options such as Visual Studio and Visual Studio Code. The choice comes down to personal preference. Remember the script from the last article? We’re going to expand on it and use its output to get the results we want:

import requests
# Import the requests library to enable HTTP(S) requests, although they're synchronous.
from bs4 import BeautifulSoup
# Import the BeautifulSoup parser from the bs4 package.
headers = {"User-Agent": "Your informative and descriptive useragent here. Preferably something like your nation name."}
# Set request headers so the NS API is able to know your identity.
params = {"region": "the_rejected_realms", "q": "numnations"}
# Set our request parameters so that the API knows what data to return.
r = requests.get("https://www.nationstates.net/cgi-bin/api.cgi", headers=headers, params=params)
# Make the request and store the result as a requests Response object.
soup = BeautifulSoup(r.text, "lxml")
# Initialize the BeautifulSoup parser using the lxml library, and parse the XML response from the API.
print(soup.numnations.string)
# Print string-type content from the soup object, specifying the XML tag you wish to print.

If all goes well, you should get something like this as your result:

This image is taken directly from the terminal, rather than an IDE or editor, so your layout may vary.

This process is fairly simple and can be done for any shard in the NS API. You might be wondering: what does "soup.numnations.string" mean? It is a reference to the object (soup), an XML tag in the response (numnations), and the type of data that we want to print (string). You can adjust this for each shard that you call, so if you were to call the regions shard, you would type "soup.regions.string" to get a string of the response. You want a string because you are trying to get the response in a human-readable format.
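To see how this tag lookup generalizes without making a live API call, you can feed BeautifulSoup a hand-written XML snippet shaped like a region response. The snippet below is an illustrative stand-in – the tag names mirror real region shards, but the values are made up:

```python
from bs4 import BeautifulSoup

# An illustrative stand-in for an API response; these values are made up.
sample_xml = """<REGION id="the_rejected_realms">
<NUMNATIONS>250</NUMNATIONS>
<DELEGATE>testlandia</DELEGATE>
</REGION>"""

# The lxml parser lowercases tag names, which is why NUMNATIONS in the
# response is reached as soup.numnations in your code.
soup = BeautifulSoup(sample_xml, "lxml")

print(soup.numnations.string)  # 250
print(soup.delegate.string)    # testlandia
```

The same pattern – soup, then the lowercased tag name, then .string – works for any shard you request, which is why swapping the shard in your params only requires changing one attribute name in your print statement.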

And that is it – you have now learned how to parse any result from the NationStates API! Next in the series, I will cover what you can do with your data and how to automate some basic tasks that you might encounter in your region.

We accept guest contributions!

If you have a knack for writing, have a strong opinion about something on NationStates, or would like to cover something often overlooked, we would love to hear from you! If you are interested, join our Discord server and message Chief Content Officer Blyatman (@WhoAmIToDisagree#7584).