Fast synchrotron calculations with neural networks: neurosynchro

It’s a truism in science that when you set out to solve big, inspiring questions you will soon find yourself mired in the technical details of some preposterously obscure topic. That’s surely the case for my work — while the main thrust of my astrophysics research is to study the magnetic fields of other stars and planets, one of the things I’ve been trying to think about a lot lately is the efficient computation of numerical parameters for polarized synchrotron radiative transfer. So it goes.

One of the tradeoffs of my new job is that I have a lot less time for doing astrophysics research myself, so this work is moving forward slowly these days. But I did manage to make something neat last year, and now I have time to write about it! It’s neurosynchro, an open-source Python package for training and using neural networks to compute approximate synchrotron radiative transfer coefficients. You can jump right into a tutorial, installation instructions, and the GitHub repository.

The basic problem that neurosynchro is trying to help solve is polarized synchrotron radiative transfer. Let’s break it down:

  * Synchrotron radiation is the light emitted by relativistic charged particles as they spiral around magnetic field lines.
  * Radiative transfer is the business of following that light through a medium that emits, absorbs, and otherwise modifies it, so that you can predict what a distant observer actually sees.
  * Polarized means that we keep track of not just the brightness of the radiation but its full polarization state, which encodes a great deal of information about the magnetic field.

Why might you care about all this? One prominent example is if you’re in the business of making images of black hole shadows like the Event Horizon Telescope just did! The black hole itself emits no light, but the relativistic plasma around it does — via the synchrotron process. It’s cool to take the picture, but to actually get scientific insight from the data, you need software for polarized synchrotron radiative transfer to connect the data to a model.

When you design such software, you can basically break the problem into three pieces. First, you need a model of what the magnetic fields and particles are like. Second, at each point along the line of sight between you and the emitting plasma, you need to calculate eight special numbers that describe how the radiation is emitted and modified. Third, you need to put those numbers into a matrix integral computation that gives you your final answer.
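
To make that third step concrete, here is a deliberately naive sketch of the matrix part, assuming the standard form of the polarized transfer equation dS/ds = j - K S for the Stokes vector S = (I, Q, U, V). The matrix layout and sign conventions vary between references, and real codes use far more careful integrators than a forward-Euler step:

```python
import numpy as np

# Sketch only: the eight coefficients are three emissivities (j_i, j_q, j_v),
# three absorption coefficients (a_i, a_q, a_v), and two Faraday mixing
# coefficients (rho_q, rho_v), expressed in a frame aligned with the local
# magnetic field. Conventions differ between papers.

def transfer_matrix(a_i, a_q, a_v, rho_q, rho_v):
    """The 4x4 absorption/Faraday-mixing matrix K."""
    return np.array([
        [a_i,    a_q,     0.0,    a_v  ],
        [a_q,    a_i,     rho_v,  0.0  ],
        [0.0,   -rho_v,   a_i,    rho_q],
        [a_v,    0.0,    -rho_q,  a_i  ],
    ])

def euler_step(stokes, coeffs, ds):
    """Advance the Stokes vector (I, Q, U, V) by one naive forward-Euler step."""
    j_i, j_q, j_v, a_i, a_q, a_v, rho_q, rho_v = coeffs
    emission = np.array([j_i, j_q, 0.0, j_v])
    K = transfer_matrix(a_i, a_q, a_v, rho_q, rho_v)
    return stokes + ds * (emission - K @ stokes)
```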

It turns out that the second step is really tricky. Even in relatively simple models, each of the eight numbers you need to compute is a double integral based on complex physics. It’s a whole research effort to write a program to do these calculations, and the calculations are often slow and finicky. I would claim that the other two steps are relatively straightforward, although there’s a whole literature on radiative transfer integrators as well.
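
Very schematically, and glossing over a lot, each coefficient has a nested-integral structure along these lines, where N(γ) is the particle energy distribution and the inner kernel itself involves Bessel functions or harmonic sums:

```latex
% Schematic only: the emissivity folds the particle energy distribution
% N(gamma) through a single-particle kernel that is itself an integral
% (or harmonic sum); the other coefficients have analogous structures.
j_\nu(\theta) \sim \int_1^\infty \mathrm{d}\gamma \, N(\gamma)
  \int \mathrm{d}x \, \mathcal{P}_\nu(\gamma, \theta; x)
```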

Neurosynchro helps with this second step. It does not do these tricky calculations itself. Instead, it helps you create a function that can give you an approximation to the output of a detailed calculation, quickly and reliably. So if you have a program that does one of these calculations, you can have one big session where you generate a bunch of sample outputs, then use the fast approximation in your transfer calculations. Compared to my Rimphony code — which will get a post of its own, sooner or later — neurosynchro can speed calculations up by a factor of 10,000 while maintaining good accuracy.

How do we generate this approximation? Well, what I’ve described above is exactly the process of using an artificial neural network! One way of thinking about neural networks is that they are tools for automatically approximating multi-dimensional functions, which is exactly the task here. The approximation is “trained” on the outputs of a detailed calculation program, a slow but one-time step. Once that’s done, it’s fast to compute new answers for arbitrary inputs.
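
Here’s a toy illustration of that idea, written with scikit-learn rather than the deep-learning stack that neurosynchro actually builds on; the function being fit is a stand-in, not real synchrotron physics:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

# Pretend inputs to a slow, detailed calculation: say, log10(frequency)
# and a viewing angle. In reality, filling this table takes lots of CPU time.
X = np.column_stack([
    rng.uniform(9.0, 11.0, 5000),
    rng.uniform(0.1, 3.0, 5000),
])
y = X[:, 0] * np.sin(X[:, 1])  # stand-in for the expensive calculation's output

# Training is the one-time, moderately slow part ...
net = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000)
net.fit(X, y)

# ... but evaluating the trained network for new inputs is very fast.
y_approx = net.predict(np.array([[10.3, 0.7], [9.5, 2.1]]))
```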

There are two especially tricky parts that neurosynchro takes care of. First, the numbers that describe the radiative transfer matrix must obey a particular set of relationships, even as they vary over a wide range of magnitudes. Neurosynchro automates a group of parameter transformations that both ensure that you don’t get bogus results, and make it much easier for the neural network training to succeed.
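
As an example of the kind of transformation involved (not necessarily the exact scheme neurosynchro uses): the total-intensity coefficients are positive and span many orders of magnitude, so it helps to work with their logarithms, while the polarized coefficients are bounded by the total-intensity ones, so it helps to work with dimensionless ratios that can be clamped to the physically allowed range:

```python
import numpy as np

def to_training_space(j_i, j_q, j_v):
    # Train on log10 of the (positive) total-intensity emissivity and on
    # bounded ratios for the polarized pieces.
    return np.log10(j_i), j_q / j_i, j_v / j_i

def from_training_space(log_j_i, q_ratio, v_ratio):
    # Clamping the ratios means the reconstructed coefficients can never
    # violate the physical bound |j_Q|, |j_V| <= j_I, even if the network
    # extrapolates badly.
    j_i = 10.0 ** log_j_i
    q_ratio = np.clip(q_ratio, -1.0, 1.0)
    v_ratio = np.clip(v_ratio, -1.0, 1.0)
    return j_i, q_ratio * j_i, v_ratio * j_i
```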

Second, different physical models take different input parameters. In some cases, you might care about a parameter p describing a power-law distribution of particle energies. In other cases, it might be θ describing a characteristic temperature. Neurosynchro gives you a common framework for describing these inputs, making it possible to use it as a common interface to a variety of physical models and detailed synchrotron calculation codes.
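
Purely as a hypothetical illustration (the real configuration format is documented in the neurosynchro tutorial), the idea is that each detailed calculation code simply declares its input parameters and how they should be sampled, and everything downstream is agnostic about which code produced the numbers:

```python
# Hypothetical parameter descriptions; the names and structure here are
# made up for illustration, not neurosynchro's actual configuration format.
POWER_LAW_INPUTS = {
    "s":     {"meaning": "frequency / cyclotron frequency",       "sampling": "log"},
    "theta": {"meaning": "viewing angle (radians)",               "sampling": "linear"},
    "p":     {"meaning": "power-law index of particle energies",  "sampling": "linear"},
}

THERMAL_INPUTS = {
    "s":     {"meaning": "frequency / cyclotron frequency",       "sampling": "log"},
    "theta": {"meaning": "viewing angle (radians)",               "sampling": "linear"},
    "T":     {"meaning": "characteristic temperature",            "sampling": "log"},
}
```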

I think this is one of the neatest things about neurosynchro. Traditionally, if you want to use a synchrotron calculation code, you need to install it, figure out its interface, discover its strengths and weaknesses, and find a way to apply it to whatever problem you’re trying to solve. With the neural network approach, this undertaking can instead be broken down into several, nicely distinct steps:

  1. If you’re starting totally from scratch, you need to create a training set with your detailed calculation code. Neurosynchro specifies a very simple input data format, so no matter what code you’re using it should be very easy to generate the training set — you just need to burn a lot of CPU-hours. And, crucially, once you’ve created your training set, you’re done. You don’t need to rerun those slow calculations ever again, and other folks don’t need to either! You can share the training data online, as I have done for this Rimphony example. Now anyone else can use the data as an input for their own neural network.
  2. If you want to create your own specialized approximator, you need to download or compute a training set. Then you need to describe the structure of the data and run the training process, which can take a little while. The great thing is that the output of this process is once again a simple, portable dataset: here is a network trained on my Rimphony data. If you want to spend time optimizing the accuracy and behavior of the network, you can do it without rerunning the really big computations to generate the training data in the first place, and then you can easily share the results with others.
  3. But if all you care about is getting the eight magic numbers, all you need to do is download a pre-trained network. The data file is small (160 kiB for the example above), the resulting computation is extremely fast, and the technology is built on popular machine learning tools, so installation is generally easy. There’s a quick sketch of what this looks like just below.
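
From memory, using a pre-trained network looks roughly like the following; treat the module, class, and argument names here as placeholders and follow the tutorial linked above for the authoritative version:

```python
# Rough sketch from memory; the exact module/class/argument names may differ,
# so consult the neurosynchro tutorial for the real interface.
from neurosynchro.impl import PhysicalApproximator

approx = PhysicalApproximator('path/to/downloaded/network')

# Ask for the coefficients at some plasma conditions: observing frequency,
# magnetic field strength, particle density, viewing angle, and whatever
# model-specific parameters (here a power-law index p) the network expects.
coeffs, oos_frac = approx.compute_all_nontrivial(
    nu=230e9, B=50.0, n_e=1e5, theta=1.0, p=2.5)
```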

Say, for instance, that someone has developed a whiz-bang new synchrotron calculator that they claim will solve all of your problems. Historically, it’s a major investment to try out their code and see how it performs. With neurosynchro, you just have to download about a megabyte of data and swap out a file. And did I mention that it can be more than 10,000 times faster than running the detailed calculations?

I need to write all of this up in a detailed paper where I really dig into the performance and limitations of the approach, but I hope it will prove to be a genuinely useful tool in this very, very niche sector. If you do polarized synchrotron radiative transfer calculations, try neurosynchro today!

Questions or comments? For better or worse this website isn’t interactive, so send me an email or, uh, Toot me.

To get notified of new posts, try subscribing to my lightweight newsletter or my RSS/Atom feed. No thirsty influencering — you get alerts about what I’m writing; I get warm fuzzies from knowing that someone’s reading!
