The Fall of a Superhero

Recently I have been reading through a newer edition of Young & Freedman’s University Physics, the main course textbook when I was a physics undergrad. Chapter 2 contains the following basic problem concerning 1-dimensional motion with constant acceleration. Since the book does not contain an answer or solution I decided to post a solution here. I highly recommend this book for undergraduate physics students and anyone who wants an excellent overview of the subject.

Question:

The Green Lantern is standing at the top of a tall building with height h and steps off, falling freely from rest to the ground below. He falls half of the total distance in the last 1.0 seconds of his fall. What is the height of the building?


Solution:

We can start by sketching the problem. I find it helpful to re-formulate the problem as a general timeline, with x the displacement from the starting position. We know that he starts from rest at time zero. Halfway down the building he has travelled h/2 metres, but we do not yet know how long that took or his velocity at the halfway point. Finally, at the end point we know that the time equals the time at the halfway point plus one second and that the total displacement is h; the final velocity is still unknown. The only force acting on the falling object is gravity, so we have a constant acceleration of g = 9.8 metres per second squared.

First half of fall:

We have the following equation for motion with constant acceleration (see the textbook for details):

v_{1}^{2} = v_{0}^{2} + 2a(x_{1} - x_{0})

Substituting the known values for v_{0} , a , x_{0} and x_{1}:

v_{1}^{2} = 0 + 2g(\frac{h}{2} - 0) = gh

v_{1} = \sqrt{gh}

Second half of fall:

Now for the second half of the fall. Another equation of motion derived in the textbook is

x_{2} = x_{1} + v_{1}(t_{2} - t_{1}) + \frac{1}{2} a (t_{2} - t_{1})^{2}

Substituting known values gives us:

h = \frac{h}{2} + v_{1}(1) + \frac{1}{2} g (1)^{2}

Subtracting h/2 from both sides and tidying up:

\frac{h}{2} = v_{1} + \frac{g}{2}

Combining by substitution:

We can then substitute our expression for v_{1} :

\frac{h}{2} = \sqrt{gh} + \frac{g}{2}  

Multiplying both sides by 2 gives

h = 2\sqrt{gh} + g

We now have an equation containing only one unknown, h (remember that the acceleration due to gravity, g, is known). It is almost in quadratic form if we recognize that it is equal to

h = 2 \sqrt{g} \sqrt{h} + g 

If we substitute \alpha = \sqrt{h}

then rearranging gives the quadratic equation

\alpha ^{2} - 2 \sqrt{g} \alpha - g = 0

and we can solve for \alpha using the well-known quadratic formula:

\alpha = \frac{-b \pm \sqrt{b^{2} - 4ac}}{2a}

where a = 1, b = -2\sqrt{g} and c = -g.

\alpha = \frac{2\sqrt{g} \pm \sqrt{4g + 4g}}{2} = \frac{2\sqrt{g} \pm \sqrt{8g}}{2} = \frac{2\sqrt{g} \pm \sqrt{4}\sqrt{2}\sqrt{g}}{2} = \frac{2\sqrt{g} \pm 2\sqrt{2}\sqrt{g}}{2} = \sqrt{g} \pm \sqrt{2g}

This means that

\sqrt{h} = \alpha = \sqrt{g} \pm \sqrt{2g} = \sqrt{g}( 1 \pm \sqrt{2} )

The quadratic formula gives us two possibilities for the total displacement h. The trick here is to recognize that, since it would not make sense for the final displacement to be negative, it also would not make sense for the square root of the final displacement to be negative. We defined the acceleration to be in the positive direction, and since he starts from rest he cannot end up with a negative displacement! So the only valid value for h ends up being

h = g(1 + \sqrt{2})^{2} = g(1 + 2\sqrt{2} + 2) = g(3 + 2\sqrt{2}) = 9.8(3 + 2\sqrt{2}) \approx 57.1 m
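
As a quick sanity check, here is a short Python snippet (a minimal sketch, assuming g = 9.8 m/s²) that computes h from the closed-form solution and confirms that the second half of the fall really does take 1.0 seconds:

import math

g = 9.8  # acceleration due to gravity, in metres per second squared

# Height of the building from the closed-form solution derived above.
h = g * (3 + 2 * math.sqrt(2))

# Free fall from rest: x = (1/2) g t^2, so t = sqrt(2 x / g).
t_half = math.sqrt(2 * (h / 2) / g)   # time to fall the first half
t_total = math.sqrt(2 * h / g)        # time to fall the full height

print("h = %.1f m" % h)                               # 57.1 m
print("last half takes %.2f s" % (t_total - t_half))  # 1.00 s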

 




How to get up-to-date Python packages without bothering your cluster admin

If you have ever been stuck as a user on an out-of-date cluster without root access, you will know it can be frustrating to ask the administrator to install packages for you. Even if they respond, by the time they get around to it you might have moved on to something else. The moment could be gone.

Luckily, as far as Python is concerned, the pyenv project allows users to install their own local Python version or even assign different versions to different directories/projects.

Python natalensis (Southern African Python), Reptilia Plate 9 in A. Smith: Illustrations of the Zoology of South Africa, Reptilia. Smith, Elder, and Co., London 1840. Public domain image.

João Moreira has written a great beginner’s guide on the Amaral Lab homepage to help you get started. I now have the latest version of Python 2 (v2.7.12) installed, along with essential packages like Scipy and Pandas, which I added using pip.

Installation of pyenv is easy.

curl -L https://raw.githubusercontent.com/yyuu/pyenv-installer/master/bin/pyenv-installer | bash 

Different versions of Python can then be installed with

pyenv install 3.4.0

Switching your global Python version is then as simple as typing

pyenv global 3.4.0
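
pyenv can also pin a version to a single project directory (the per-directory behaviour mentioned above); the local command writes a .python-version file in that directory. A minimal sketch, assuming 3.4.0 is already installed and "myproject" is a hypothetical project folder:

cd myproject
pyenv local 3.4.0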

From first impressions I can highly recommend pyenv, and I will continue to learn about it over the coming days as I use it. Please refer to João’s excellent post for more details.




Plotting multivariate data with Matplotlib/Pylab: Edgar Anderson’s Iris flower data set

The problem of how to visualize multivariate data sets is something I often face in my work. In numerical optimization we might have a single objective function and multiple design variables, which can be represented as columnar data in the form {x1, x2, x3, … xn, y}, a.k.a. NXY. Design spaces with more than a few dimensions are difficult to visualize, whether to estimate the relationship between each independent variable and the objective or to perform a sensitivity study.


While perusing recent work on, and tools for, visualizing such data I stumbled across some nice examples of multivariate data plotting using a famous data set known as the “Iris data set”, also known as Fisher’s Iris data set or Edgar Anderson’s Iris flower data set. It contains measurements from 50 flowers of each of three different species, collected in the Gaspé Peninsula. This set is not in the NXY form typical of optimization routines; instead each flower has a number of parameters measured and tabulated, namely sepal length, sepal width, petal length and petal width. In other words, there is no resultant Y data that is a function of the design space vector. Instead, it is interesting to plot relationships between the measured parameters to determine whether they correlate with each other.

A quick internet search brings up a number of examples where the set has been plotted as a gridded set of subplots using various software tools, for example Mike Bostock’s blog post demonstrating his D3.js library, and the version on the Wikipedia page.

I decided to try to write a Matplotlib script that generates a similar gridded multiplot from the data set. I did so within a Jupyter Notebook (formerly known as IPython Notebook) running Python 2.7. The data was imported using Pandas and plotted with Matplotlib’s Pyplot module. The import could have been done in a number of different ways; it is just that Pandas is designed to work with CSV files containing a mix of types.

The resulting image can be seen below.

Fisher’s Iris data set, sometimes known as Anderson’s Iris data set; visualization by Simon Bance using Matplotlib/Pyplot. A multivariate data set introduced by Ronald Fisher in 1936, using data collected by Edgar Anderson on Iris flowers in the Gaspé Peninsula.

Here is the script:


"""
https://en.wikipedia.org/wiki/Iris_flower_data_set
A script for plotting multivariate tabular data as gridded scatter plots.
"""
import os
import pandas as pd
import matplotlib.pyplot as plt

inFile = r'iris.dat'

# Check if data file exists:
if not os.path.exists(inFile): sys.exit("File %s does not exist" % inFile)

rootFolder = os.path.dirname(os.path.abspath(inFile))

# Read in the data file
df = pd.read_csv(inFile, delimiter="\t")
headers = list(df.columns.values)
df.head(5) # Prints first n lines to check if we loaded the data file as expected.

# We also have n=4 distinct species in the Species column and I will
# list the species names so we can distinguish them later for plotting:
species = list(df.Species.unique()) # normal python list, thank you very much!
print type(species)

# Here we specify how many columns prepend and append the columns that we want to use.
# For Dakota this would include the objective function(s) column(s) appended to the end.
num_precols = 0
num_obj_fn = 1

# Work out the number of dimensions in each design vector:
num_dims = df.shape[1] - num_obj_fn # We know that there are 3 additional columns (and hope that it stays consistent in future)!
print "Our design vector has %s dimensions: %s" % (num_dims, headers[num_precols:-1])
gridshape = (num_dims, num_dims)
num_plots = num_dims**2
print "Our multivariate grid will therefore be of shape", gridshape, "with a total of", num_plots, "plots"

# Plot the data in a grid of subplots.
fig = plt.figure(figsize=(12, 12))

# Iterate over the correct number of plots.
n = 1

# Create an empty 2D list to store created axes. This alows us to edit them somehow.
axes = [[False for i in range(num_dims)] for j in range(num_dims)]

for j in range(num_dims):
for i in range(num_dims):

# e.g. plt.subplot(nx, ny, plotnumber)
ax = fig.add_subplot(num_dims, num_dims, n) # Plot numbering in this case starts from 1 not zero (MATLAB style indexing)!

# Choose your list of colours
colors = ['red', 'green', 'blue']

for index, s in enumerate(species):

# x axis: For each in the species list look at all rows with that value in the Species column.
# Use the ith column of that subset as the x series.
# y axis: Likewisem, but use the jth column.

if i != j:
ax.scatter(df.where(df['Species'] == s).ix[:,i], df.where(df['Species'] == s).ix[:,j], color=colors[index], label=s)
else:
# Put the variable name on the i=j subplots:
ax.text(0.25, 0.5, headers[i])
pass

# Set axis labels:
ax.set_xlabel(headers[i])
ax.set_ylabel(headers[j])

# Hide axes for all but the plots on the edge:
if j < num_dims - 1: ax.xaxis.set_visible(False) if i > 0:
ax.yaxis.set_visible(False)

if i == 1 and j == 0:
ax.legend(bbox_to_anchor=(3.5, 1), loc=2, borderaxespad=0., title="Species name:")

# Add this axis to the list.
axes[j][i] = ax

n += 1

plt.subplots_adjust(left=0.1, right=0.85, top=0.85, bottom=0.1)

plt.savefig("%s/iris.png" % rootFolder, dpi=300)
plt.show()
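
As an aside, pandas itself ships a scatter-matrix helper that produces a similar grid in a couple of lines, although it does not colour the points by species out of the box, which is why the explicit loop above is still useful. A minimal sketch (the import path was pandas.tools.plotting in the pandas versions of the time; newer releases use pandas.plotting):

from pandas.tools.plotting import scatter_matrix  # pandas.plotting in newer releases

scatter_matrix(df, figsize=(12, 12), diagonal='hist')
plt.show()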

Further so-called “classic data sets” are listed at https://en.wikipedia.org/wiki/Data_set#Classic_data_sets.




The new default colormap for matplotlib is called “viridis” and it’s great!

It’s probably not news to anyone in data visualization that the widely used “jet” colormap (sic) (sometimes referred to as “rainbow”) is a bad choice, for several reasons:

  • Doesn’t work when printed in black & white
  • Doesn’t work well for colourblind people
  • Not linear in colour space, so it’s hard to estimate numerical values from the resulting image

The Matlab team recently developed a new colormap called “parula”, but amazingly, because Matlab is commercially licensed software, no-one else is allowed to use it!
The Matplotlib developers have therefore produced their own version, based on the principles of colour theory (covered in my own BSc lecture courses on Visualization 🙂), which is actually an improvement on parula. The new Matplotlib default colormap is named “viridis” and you can learn all about it in the following lecture from the SciPy 2015 conference (on YouTube):

Viridis will be the new default colour map from Matplotlib 2.0 onwards, but users of v1.5.1 can already opt in by passing cmap=plt.cm.viridis (or cmap='viridis') to plotting commands.
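
For example, a minimal sketch with some randomly generated placeholder data:

import numpy as np
import matplotlib.pyplot as plt

data = np.random.rand(20, 20)  # placeholder data
plt.imshow(data, cmap=plt.cm.viridis, interpolation='nearest')
plt.colorbar()
plt.show()
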
I don’t know about you, but I like it a lot and will start using it immediately!




[PDF] “Grain-size dependent demagnetizing factors in permanent magnets” reprint update

The reprint of our Journal of Applied Physics (JAP) paper “Grain-size dependent demagnetizing factors in permanent magnets” has been updated since the old version was not being discovered by the Google Scholar crawler.

There is also now a version on arXiv. I hope that Google Scholar will now correctly index the paper so that it’s easier for people to find!

The full, correct reference for the paper is:

S. Bance, B. Seebacher, T. Schrefl, L. Exl, M. Winklhofer, G. Hrkac, G. Zimanyi, T. Shoji, M. Yano, N. Sakuma, M. Ito, A. Kato and A. Manabe, “Grain-size dependent demagnetizing factors in permanent magnets”, J. Appl. Phys. 116, 233903 (2014); http://dx.doi.org/10.1063/1.4904854