Check it out: http://wspr.io
So smooth and shiny.
Matlab has a number of methods for interpolating data, both for data that is sampled on a regular grid and for data that is “scattered”, or randomly distributed.
Plotting surfaces over grid points is easy using Matlab’s surf command, and interpolation of that data to get smoother plots is straightforward. For example,
[x,y,z] = peaks(10);
Generally I recommend avoiding 3D plots, so I prefer to view these from above in 2D. Here x and y are 10×10 matrices defined by (the equivalent of) meshgrid(linspace(-3,3,10)), and z is the value at each point in (x,y) space.
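The workflow above can be sketched as follows; the view(2) call is my assumption for how the 2D view was produced:

```matlab
[x,y,z] = peaks(10);   % coarse 10x10 sample of Matlab's demo function
surf(x,y,z)            % plot the gridded data as a surface
view(2)                % look straight down for a 2D, heat-map-style view
```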
If you have sampled data with non-uniform spacing, however, it's not as obvious how to plot that data. In this case your data would be organised with x, y, and z as column vectors, one measurement per row. The surf command simply doesn't handle data in this form, since the data isn't organised in a way that allows adjacent points to be connected.
Matlab provides commands for analysing this data using Delaunay triangulation, for which it also provides an analogue to surf. Here's an example:
N = 15;
rng(13); % repeatable random values
x = randn(1,N); % X & Y coords
y = randn(1,N);
z = randn(1,N); % "value" at each coordinate
After a little digging, you’ll find that the easiest way to plot this data as a surface is:
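The snippet itself appears to be missing here; given the later reference to "the example using trisurf", it was presumably along these lines:

```matlab
trisurf(delaunay(x,y), x, y, z)   % triangulate, then plot each triangle as a patch
```

which explains the "slightly redundant" remark: x and y are passed once to compute the triangulation and again for plotting.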
Seems slightly redundant, but easy enough.
Surf plots with coarse grids
The problem is that surface plots are poor at visualising data over coarse meshes, since it's their corners which define the values. This can be seen in the following example, using peaks to plot a surface:
The first row shows plain surf. It can be seen that the colours of each "patch" don't do a good job of representing their height (or "value"). In the second row, Matlab's interpolating shader is used to "improve" this. It's an improvement, but distortions are obvious where there are large changes in both directions. (Colours are smeared along diagonals, basically.)
Finally, in the third row an interpolation function is used to generate more data points. This is quite a coarse interpolation, but couldn’t go finer without the edges taking over in the 3D plot. The plot looks far more convincing. Gridded interpolation can be done in a number of ways; for this example I used
F = griddedInterpolant(x',y',z');
[x2,y2] = ndgrid(linspace(-3,3,20));
surf(x2,y2,F(x2,y2))   % evaluate the interpolant over the finer grid
(The transposes being necessary because griddedInterpolant expects ndgrid-format coordinates, not the meshgrid format output by peaks. Good old Matlab.)
Surf plots with coarse scattered data
But what about scattered data? While Matlab does provide scatteredInterpolant, this form is only really convenient if you want to interpolate onto a regular grid. What if you just want to improve the look of your overly-coarse surface plot? Here's the example using trisurf described above, with and without interpolated shading (shading interp):
Note that without the interpolated shading, the plot is basically unusable. But even with interpolated shading, there's significant distortion along certain diagonals. I've left out the edge lines in the bottom-right graph to emphasise this. If we resampled this function over a regular grid using scatteredInterpolant, we'd end up with points outside the original region, which might not always be appropriate. We could then delete the outsiders, but that would require additional processing. Wouldn't it be nice to simply resample within the original region?
To do this requires a couple of steps. First, extract the Delaunay triangulation of our points:
DT = delaunay(x,y);
This function returns an M×3 list of indices into x and y, where M is the number of triangles. So if we extract the coordinates for each triangle and average them, we obtain the mid-point of each, including an interpolated value:
x2 = mean(x(DT),2);
y2 = mean(y(DT),2);
z2 = mean(z(DT),2);
This figure shows the grid that results after interpolating once and re-applying the Delaunay triangulation (black before, red after).
Note that the Delaunay algorithm doesn't simply divide up the original triangles into smaller components. We can then perform this process iteratively to interpolate to an arbitrary level of detail:
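One way to write that iteration (four passes chosen arbitrarily; this is my sketch, not the delaunay_surf source):

```matlab
for n = 1:4
    DT = delaunay(x, y);          % triangulate the current point set
    x = [x, mean(x(DT), 2)'];     % append each triangle's centroid...
    y = [y, mean(y(DT), 2)'];
    z = [z, mean(z(DT), 2)'];     % ...with its linearly interpolated value
end
trisurf(delaunay(x, y), x, y, z)
```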
And after removing the grid lines and increasing the order, the results look pretty nice:
Interpolation will never solve the problem of under-sampling if there’s information missing, but it definitely helps to draw more pleasing figures.
I've uploaded this code to the Matlab file exchange under the name delaunay_surf, and added it to my generic Matlab repository on Github: https://github.com/wspr/matlabpkg. It's pretty basic:
p = delaunay_surf(x,y,z,'N',4);
with a few additional options. Check out the example file for more, er, examples.
I have a love/hate relationship with Matlab, the details of which I probably obliquely mention here from time to time. The biggest reasons that I stick with Matlab are (a) I have to teach my students to use it, and (b) for all its warts, its IDE is quite nice for code authoring/debugging, exploring data sets, and even drawing somewhat sophisticated 3D diagrams.
I’ve just installed Matlab 2015a on my laptop, and am pretty happy to also see that Matlab now supports Git: (this appears to have been added in 2014b, so I’m a little behind the times)
I usually use a lightweight GUI to perform simple Git operations (GitX on Mac) and the command line for anything more sophisticated. Most common Git operations can be performed from a clunky interface that involves right-clicking on a file, selecting “Source Control” from the menu, and then selecting from a list. Rather like TortoiseGit on Windows, I guess. There are a few impedance mismatches, such as the term “Compare to Ancestor”, which is Matlab’s version of performing “git diff” to see what you’ve changed in the current file. I think trying to abstract multiple version control systems under a consistent set of Matlab commands is probably a recipe for disaster, long term, but I can understand the motivation.
I like that the Git status is indicated in the file listing, but I suspect I'll rarely, if ever, use the Matlab interface to actually perform source control. I'm a little disappointed that there isn't a better user interface than delving into contextual menus and receiving little more than plain info in pop-up windows. For example, what I'd like to see would be to double-click a "changed" icon (blue square in the above screenshot), which would show the current differences from the last commit, with an integrated text field and a nice big button to commit the changes. Not as easy, but more likely to keep people within Matlab.
What I’m really hoping Git support in Matlab makes it possible to slowly introduce my undergraduate students to Git (engineers learning source control is a subject I hope to discuss another time), and perhaps more importantly I hope it will help facilitate collaboration with research students that work on my code. Fingers crossed.
My real work in magnetics involves evaluating sometimes complex integrals that often result in solutions that include the elliptic integrals. These are a funny set of functions that I’ve discussed before.
Matlab has historically only included the bare minimum here: an in-built function to calculate the complete elliptic integrals of the first and second kind. Hence Igor Moiseev's valuable work in this area (now moved to Github).
Those with a variety of Matlab toolboxes, however, will usually run into the newish functions provided by the symbolic toolbox: ellipticE and friends. (Quite an impressive collection, actually.) However, if you actually wish to use these functions for something computationally intensive, such as numerical integration, it's a bad idea to use them. Why?
>> tic; ellipticF(0.5,0.5), toc
ans =
    0.5105
Elapsed time is 0.020991 seconds.
That’s a small fraction of a second — seems fast! Compare with Igor’s native Matlab function:
>> tic; elliptic12(0.5,0.5), toc
ans =
    0.5105
Elapsed time is 0.000705 seconds.
So the symbolic toolbox is fine if you just need to evaluate a value or two. But for anything iterative — and this holds for algorithms in general, not just for my own example of numerically integrating elliptic integral functions — you’ll want to avoid the overhead of switching to the symbolic toolbox mid-calculation.
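The difference compounds quickly inside a loop. A rough sketch of the kind of comparison I mean, using the built-in ellipke for the complete integral (timings will obviously vary by machine):

```matlab
m = linspace(0, 0.9, 1000);    % parameter values to evaluate
K = zeros(size(m));
tic
for ii = 1:numel(m)
    K(ii) = ellipke(m(ii));    % native numeric routine: microseconds per call
end
toc
% Swapping ellipke for the symbolic toolbox's ellipticK here multiplies
% the total run time roughly thirty-fold, going by the single-call timings above.
```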
(This is an example of why I find Mathematica a more enjoyable computational environment; no alien toolboxes.)
This is I’m sure old news to Excel gurus out there (of which I’m reluctantly, seemingly, turning into).
I started having a little more appreciation for Excel when I began using vlookup more seriously; this allows a semi-structured way of dealing with linked data in Excel. E.g., when a list of grades has a different set of students than a class list, you can still sensibly look up values without manually copy/pasting information from one to the other.
I suppose an actual example is in order:
I hope this is fairly obvious; we have extremely basic lookup-style functionality (hence the name) where the function looks up "c" in the first column and outputs the corresponding entry from column 2. You'd obviously usually use a reference to another cell rather than hard-code the "c" entry in there.
So this is great, but comes with some problems.
First of all, vlookup of course has the incredibly frustrating default behaviour of not exactly matching entries; in my own experience, things never work out unless you write it in the form vlookup(•,•,•,FALSE), and if I'm rusty I never remember whether I should be writing TRUE or FALSE as the final argument. So that's one reason vlookup is dumb.
Second of all, vlookup breaks completely silently if you ever alter the columns in your original query. Perhaps I am re-arranging the spreadsheet so that there is a grade column as well:
Unlike what happens in a lot of other Excel cases, inserting the new column hasn't updated the vlookup formula (not that I'm saying it should). And our lookup now silently returns different data. This can bite you pretty badly unless you have stringent error checking somewhere along the line.
On the other hand, the useful replacement for vlookup does NOT have this same problem. This solution is usually called index+match, and you shouldn't be intimidated out of using it just because it's slightly more complex. In fact, I find it easier to use in most cases:
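The formula in question, reconstructed from the prose description that follows (column letters per that description):

```
=INDEX(D:D, MATCH("c", C:C, 0))
```

MATCH finds the row, INDEX retrieves the value from that row; the two ranges are completely independent of each other.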
Translating that formula into prose produces something like this: "index into column D at the row that matches "c" in column C". Note again Excel's insistence that you need a dumb suffix to tell match to do its thing properly; don't forget the final 0 for an exact match. Why is index+match better? I'm glad you asked:
As soon as we inserted the extra column, the match function automatically updated and our formula is still correct. You can argue more tenuous advantages of index+match (e.g., that vlookup can't have a negative column index), but the automatic updating is the killer.
This image perfectly captures a notion I’ve had floating around in my noggin for years now. Via FrenchKheldar on Stack Exchange
I’ve recently been interested human body modelling in biomechanics. One of the honours projects I’m supervising has a goal of producing a mock lumbar spine, which lead us to a Thingiverse example of exactly that (our will have additional features for attaching fake ligaments and disks, however).
This example linked to the BodyParts3D project, which consists of thousands of 3D models taken from MRI data, hand-segmented and manually augmented where necessary using known anatomical information (documented in their paper).
The project consists of an online ‘anatomography’ program, which can be used to browse the database and produce viewable subsets for visualisation, as well as the database of body parts itself. The body parts are in the OBJ format, which I know little about but seems straightforward enough, and there are two resolutions available.
Using the low-resolution models, I've imported the bones of the model into Matlab using Dirk-Jan Kroon's Wavefront OBJ toolbox; the results are shown below. I'll tidy up the code and make it public as soon as I can; there are a few things on my plate before I can do this, however.
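For the curious, the core of reading such a file is small. A minimal sketch (not the toolbox's implementation, assuming a hypothetical filename, and ignoring every OBJ feature beyond plain triangular v/f records):

```matlab
fid = fopen('bone.obj');                 % hypothetical filename
v = []; f = [];
line = fgetl(fid);
while ischar(line)
    if strncmp(line, 'v ', 2)            % vertex: "v x y z"
        v(end+1,:) = sscanf(line(3:end), '%f %f %f');
    elseif strncmp(line, 'f ', 2)        % face: "f i j k" (triangles only)
        f(end+1,:) = sscanf(line(3:end), '%d %d %d');
    end
    line = fgetl(fid);
end
fclose(fid);
patch('Vertices', v, 'Faces', f, 'FaceColor', [0.9 0.9 0.8], 'EdgeColor', 'none')
axis equal
```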
Long term, I’d like to make the skeleton move and use the anthropometric data to annotate tendon attachments and insertions on the skeleton to start producing an anatomically correct dynamic simulation of human motion. I know this is being done by OpenSim already and it would be counterproductive to re-invent the wheel, but there is still some scope for some orthogonal research in this area.