Defending Your Life (Part 3)

This is the last part of my attempt to explain our simulation software. You can read Part 1, about event generators, and Part 2, about detector simulation, if you want to catch up. Just as a reminder, we’re trying to help our theorist friend by searching for his proposed “meons” in our data. The detector simulation gives us a long list of energy deposits, times, and locations in our detector. The job isn’t done, though. Now we have to take those energy deposits and turn them into something that looks like our data – which is pretty tricky! The code that does that is called “digitization”, and it has to be written specially for our detector (CMS has its own too).

The simple idea is to change the energies into whatever it is that the detector reads out – usually times, voltages, and currents, though it can be different for each type of detector. We have to build in all the detector effects that we care about. Some are well known, but not well understood (Birks’ law, for example). Some are a little complicated, like the change in light collected from a scintillator tile in the calorimeter depending on whether the energy is deposited right in the middle or on the edge. We can use the digitization to model some of the very low-energy physics that we don’t want to have to simulate in detail with Geant4 but want to get right on average. Those are effects like the spread and collection of charge in a silicon module or the drift of ionized gas towards a wire at low voltage.
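To give a flavor of the kind of correction digitization applies, here is a tiny Python sketch of Birks’ law, which says that the light a scintillator produces saturates when the local energy deposit density (dE/dx) is large. The function name and the value of the Birks constant are purely illustrative; the real constant is measured for each scintillator material.

```python
def birks_light_yield(delta_E, dEdx, kB=0.126):
    """Visible scintillation light (arbitrary units) for an energy
    deposit delta_E [MeV] with stopping power dEdx [MeV/mm], following
    Birks' law: L ~ dE / (1 + kB * dE/dx).
    kB here is only an illustrative number, not a measured ATLAS value."""
    return delta_E / (1.0 + kB * dEdx)

# A heavily-ionizing particle (large dE/dx) yields less light per MeV
# deposited than a minimum-ionizing one -- that saturation is exactly
# what Birks' law describes.
mip_like = birks_light_yield(1.0, 0.2)     # small dE/dx
heavy_ion_like = birks_light_yield(1.0, 20.0)  # large dE/dx
```

If we got the light yield wrong for slow, heavily-ionizing particles, our simulated calorimeter response would disagree with data in a way we would eventually notice.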


One of our events with lots of “pile-up” – many standard proton-proton collisions, one dot for each, on top of one that we’re interested in (the one with the yellow tracks)

Digitization is where some other effects are put in, like “pile-up”, which is what we call the extra proton-proton collisions in a single bunch crossing. Those we usually pre-simulate and add on top of our important signal (meon) events, like using a big library. We can add other background effects if we want to, like cosmic rays crossing the detector, or proton collisions with remnant gas particles floating around in the beampipe, or muons running down parallel to the beamline from protons that hit collimators upstream. Those sorts of things don’t happen every time protons collide, but we sometimes want to study how they look in the detector too.
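The overlay idea is simple enough to sketch. Everything below – function names, the hit layout, the library structure – is invented for illustration; the real ATLAS digitization code is far more involved (it has to line up bunch crossings in time, for a start).

```python
import math
import random

def sample_poisson(mu, rng):
    """Knuth's method for a Poisson-distributed count (the standard
    library's random module has no poisson sampler)."""
    L = math.exp(-mu)
    n, p = 0, 1.0
    while True:
        p *= rng.random()
        if p <= L:
            return n
        n += 1

def overlay_pileup(signal_hits, minbias_library, mu, rng=None):
    """Overlay pile-up on a signal event: draw a Poisson number of
    pre-simulated minimum-bias events from a big library and merge
    their energy deposits with the signal's."""
    rng = rng or random.Random(0)
    merged = list(signal_hits)
    for _ in range(sample_poisson(mu, rng)):
        merged.extend(rng.choice(minbias_library))
    return merged
```

With an average of 40 extra collisions per crossing, a handful of signal deposits gets buried under thousands of pile-up ones, which is why the picture above looks so busy.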

Now we should have something that looks a lot like our data – except we know exactly what it is, without any ambiguity! With that, we can try to figure out whether our friend’s meons are a part of nature. We can build up some simulated data that includes all the different processes we already know exist in nature, like the production of top quarks, W bosons, Z bosons, and our new Higgs bosons. And we can build another set that has all of those things, but also includes our friend’s meons.

The last part, which is really what our data analysis is all about, is figuring out what makes events with meons special – different from the others we expect to see – and trying to isolate them. We can look at the reconstructed energy in the event, the number of particles we find, any oddities like heavy particles decaying away from the collision point – anything that helps. And we have to know a little about the simulation, so that we don’t end up separating meons from other particles using properties of the events that are very hard to get right in the simulation. That really is the first part of almost all our data analyses. And the last part of most of our analyses (we hope) is “unblinding”, where we finally check the data that passes all our requirements and see whether it looks more like nature with or without meons. Sometimes we use “data-driven methods” to estimate the backgrounds (or to tweak the estimates from our simulation), but almost every time we start from the simulation itself.
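Stripped down to a single counting experiment, the “with or without meons” comparison looks something like the Python sketch below. A real ATLAS analysis has many bins, systematic uncertainties, and proper statistical machinery; this is only the one-bin core of the idea, with made-up function names.

```python
import math

def poisson_logl(n_obs, expected):
    """Log-likelihood of observing n_obs events when 'expected' events
    are predicted (the n_obs! term is constant and drops out of any
    comparison, so it is omitted)."""
    return n_obs * math.log(expected) - expected

def looks_like_meons(n_obs, n_bkg, n_sig):
    """Toy 'unblinding': is the observed count more likely under
    background-plus-meons than under background alone?"""
    return poisson_logl(n_obs, n_bkg + n_sig) > poisson_logl(n_obs, n_bkg)
```

If the simulation predicts 100 background events and 50 meon events after all our requirements, then observing around 150 events favors the meon hypothesis, while observing around 100 does not.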

Some of our data with a few different guesses as to what new physics might look like (here different dark matter models). The data look more like our expectation without them, though – so no new physics today!

The usual outcome is that our friend tells us about his theory, we look for it, and we don’t find anything exciting. But by the time we get back, our theorist friends often say “well, I’ve been thinking more, and actually there is this thing that we could change in our model.” So they give us a new version of the meon theory, but this time, instead of being just one model, it’s a whole set of models that could exist in nature, and we have to figure out whether any of them are right. We’re just going through this process for Supersymmetry, trying to think of thousands of different versions of Supersymmetry that we could look for and either find or exclude. Often, for that, you want something called a “fast simulation.”

To make a fast simulation, we either go top-down or bottom-up. The top-down approach means that we look at what the slowest part of our simulation is (always the calorimeters) and find ways to make it much, much faster, usually by parameterizing the response instead of using Geant4. The bottom-up approach means that we try to skip detector simulation and digitization altogether and go straight to the final things that we would have reconstructed (electrons, muons, jets, missing transverse momentum). There are even public fast simulations like DELPHES and the Pretty Good Simulation that theorists often use to try to find out what we’ll see when we simulate their models. Of course, the faster the simulation, normally, the fewer details and oddities can be included, and so the less well it models our data (“less well” doesn’t have to be “not good enough”, though!). We have a whole bunch of simulation types that we try to use for different purposes. The really fast simulations are great for quickly checking out how analyses might work, or for checking out what they might look like in future versions of our detector in five or ten years.
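As a toy example of the top-down approach, here is what parameterizing a calorimeter response might look like in Python. Instead of tracking every shower particle with Geant4, we just smear the true energy with a typical calorimeter resolution. The resolution constants below are invented for the sketch, not ATLAS’s real parameterization.

```python
import math
import random

def fastsim_energy(true_E, a=0.5, b=0.03, rng=None):
    """Parameterized calorimeter response: smear the true energy (GeV)
    with a Gaussian of relative width a/sqrt(E) added in quadrature
    with a constant term b. The numbers are illustrative only."""
    rng = rng or random.Random()
    sigma = true_E * math.hypot(a / math.sqrt(true_E), b)
    return rng.gauss(true_E, sigma)
```

One function call replaces the simulation of thousands of shower particles, which is where the enormous speed-up comes from – and also why the subtle details and oddities get lost.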

So that’s just about it – why we really, really badly need the simulation, and how each part of it works. I hope you found it a helpful and interesting read! Or at least, I hope you’re convinced that the simulation is important to us here at the LHC.

Zach Marshall is a Divisional Fellow at the Lawrence Berkeley National Laboratory in California. His research is focused on searches for supersymmetry and jet physics, with a significant amount of time spent working on software and trying to help students with physics and life in ATLAS.

Defending Your Life (Part 2)

I’ve been working on our simulation software for a long time, and I’m often asked “what on earth is that?” This is my attempt to help you love simulation as much as I do. This is a follow-up to Part 1, which told you all about the first step of good simulation software, called “event generation”. In that step, we had software that gave us a list of stable particles that our detector might be able to see. And we’re trying to find some “meons” that our friend the theorist dreamed up.

One little problem with those wonderful event generators is that they don’t know anything about our experiment, ATLAS. We need a different piece of software to take those particles and move them through the detector one by one, helping model the detector’s response to each one of the particles as it goes. There are a few pieces of software that can do that, but the one that we use most is called Geant4. Geant4 is publicly available, and is described as a “toolkit” on their webpage. What that means is that it knows about basic concepts, but it doesn’t do specifics. It’s like building a giant Lego house out of a bag of bricks: you have to figure out what fits where, and often throw out things that don’t fit.

One of the detector layouts that we simulate

The first part of a good detector simulation is the detector description. Every piece of the detector has to be put together, with the right material assigned to each. We have a detector description with over five million (!) volumes and about 400 different materials (from xenon to argon to air to aerogel and Kapton cable). There are a few heroes of ATLAS who spend a lot of time taking technical drawings (and photographs, because the technical drawings aren’t always right!) of the detector and translating them into something Geant4 can use. You can’t put every wire and pipe in – the simulation would take an eternity! – so you have to find shortcuts sometimes. It’s a painstaking process that’s still ongoing today. We continuously refine and improve our description, adding pieces that weren’t important at the beginning several years ago but are starting to be important now, like the polyboron neutron shielding in our forward region. Few people thought early on that we would be able to model low-energy neutron flux in our detector with Geant4, because it involves really complex nuclear physics, but we’re getting so close that we’ve gone back to re-check that our materials’ neutron capture properties are correct. And sometimes we go back and revise things that were done approximately in the beginning because we think we can do better.

This part also involves making a detailed magnetic field map. We can’t measure the field everywhere in the detector (like deep in the middle of the calorimeter), and it takes too much time to constantly simulate the currents flowing through the magnets and their effect on the particles moving through the detector, so we do that simulation once and save the magnetic field that results.
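The save-it-once idea behind the field map is simple enough to sketch in Python. One dimension and a hypothetical field function stand in here for the real three-dimensional map.

```python
def make_field_map(field_fn, z_min, z_max, n_points):
    """Run the (expensive) field calculation once on a grid of
    positions and cache the samples. One dimension here; the real
    map is a three-dimensional table."""
    step = (z_max - z_min) / (n_points - 1)
    samples = [field_fn(z_min + i * step) for i in range(n_points)]
    return samples, z_min, step

def lookup_field(field_map, z):
    """Linearly interpolate in the cached map -- far cheaper than
    recomputing the magnets' field at every tracking step."""
    samples, z_min, step = field_map
    t = (z - z_min) / step
    i = min(int(t), len(samples) - 2)
    frac = t - i
    return samples[i] * (1.0 - frac) + samples[i + 1] * frac
```

The trade-off is the usual one in simulation: a finer grid is more accurate but takes more memory and more time to produce, so the spacing is chosen to match how quickly the real field varies.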

A simulated black hole event. But what do meons look like?

Next is a good set of physics models. Geant4 has a whole lot of them that you can use and (fortunately!) they have a default that works pretty well for us. Those physics models describe each process (the photoelectric effect, Compton scattering, bremsstrahlung, ionization, multiple scattering, decays, nuclear interactions, etc) for each particle. Some are very, very complicated, as you can probably imagine. You have to choose, at this point, what physics you’re interested in. Geant4 can be used for simulations of space environments, of cells and DNA, and of radioactive environments. If we used the most precise models for everything, our simulation would never finish running! Instead, we take the fastest model whose results we can’t really distinguish from the most detailed models. That is, we turn off everything that we don’t really notice in our detector anyway. Sometimes we don’t get that right and have to go back and adjust things further – but usually we’ve erred on the side of a slower, more accurate simulation.

The last part is to “teach” Geant4 what you want to save. All Geant4 cares about is particles and materials – it doesn’t inherently know the difference between some silicon that is a part of a computer chip somewhere in the detector and the silicon that makes up the sensors in much of our inner detector. So we have to say “these are the parts of the detector that we care about most” (called “sensitive” detectors). There are a lot of technical tricks to optimizing the storage, but in the end we want to write files with all the little energy deposits that Geant4 has made, their time and location – and sometimes information (that we call “truth”) about what really happened in the simulation, so later we can find out how good our reconstruction software was at correctly identifying photons and their conversions into electron-positron pairs, for example.
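As a cartoon of the idea, a “sensitive detector” boils down to a filter like the one below: of everything Geant4 simulates, only deposits in the volumes we flagged get written out. The volume names are invented for the sketch, and the real interface is a Geant4 C++ class, not a Python function.

```python
# Illustrative volume names -- the real detector description has
# millions of volumes, only some of which are flagged "sensitive".
SENSITIVE_VOLUMES = {"pixel_sensor", "sct_strip", "lar_cell", "tile_cell"}

def record_step(hits, volume, edep, time, position):
    """A sensitive-detector callback in miniature: keep only energy
    deposits in volumes we care about, with their time and location."""
    if volume in SENSITIVE_VOLUMES and edep > 0.0:
        hits.append({"volume": volume, "edep": edep,
                     "time": time, "pos": position})
    return hits
```

A deposit in a support bracket is simulated faithfully (it still absorbs energy!) but never written to the output file, which is one of the ways the storage stays manageable.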

The fun part of working on the simulation software is that you have to learn everything about the experiment. You have to know how long after the interaction each piece of the detector remains sensitive, so that you can avoid wasting time simulating particles long after that time. You get to learn when things were installed incorrectly or are misaligned, because you need those effects in the simulation. When people want to upgrade a part of the detector, you have to learn what they have in mind, and then (often) help them think of things they haven’t dealt with yet that might affect other parts of the detector (like cabling behind their detector, which we often have to think hard about). You also have to know about the physics that each detector is sensitive to, what approximations are reasonable, and what approximations you’re already making that they might need to check on.

That also brings us back to our friend’s meons. If they decay very quickly into Standard Model particles, then the event generator will do all the hard work. But if they stick around long enough to interact with the detector, then we have to ask our friend for a lot more information, like how they interact with different materials. For some funny theoretical particles like magnetic monopoles, R-hadrons, and stable charginos, we have to write our own Geant4 physics modules, with a lot of help from theorists.

The detector simulation is a great piece of software to work on – but that’s not the end of it! After the simulation comes the final step, “digitization”, which I’ll talk about next time – and we’ll find out the fate of our buddy’s meon theory.


Doing Physics in Vietnam

Beach next to the conference center

One of the perks of working in our field is the opportunities we get to go to exotic places for conferences. I always felt the HEP-MAD conference in Madagascar would top this list, but the one some of us went to in Vietnam can’t be too far behind.

The Rencontres du Vietnam conference series has been organised in the coastal town of Quy Nhon since 2011, covering different physics topics. This year, one of them was titled Physics at the LHC and Beyond, where I had the privilege of presenting ATLAS soft QCD results.

There were talks covering all aspects of LHC physics, a dedicated session on detector performance with ATLAS and CMS speakers going one after the other, and intense discussion on future colliders. Nobel Laureate François Englert was the guest of honour at the conference, and he talked about the history of the Brout-Englert-Higgs mechanism.

Charming hilltop temple

The conference was held at the relatively new International Center of Interdisciplinary Science Education (ICISE), a beautiful facility right by the sea, with its own beach. The food was amazing too – with extensive buffets for breakfast (with fried rice and noodles no less!), lunch and dinner. At the conference dinner, we even got green coconuts filled with water. We were also taken to hill-top Cham temples, and saw local dance/martial arts performances.

Jean Trân Thanh Vân, who is the founder of the renowned Rencontres de Moriond conference series, deserves a big thanks for organising this conference in Vietnam – which surely helps in making particle physics popular in south-east Asia.


Deepak Kar is a research associate in Glasgow. He is involved in soft-QCD measurements, Monte-Carlo tuning, and jet substructure studies.

Defending Your Life (Part 1)

Eur. Phys. J C Cover

Our ATLAS Simulation Paper

Having spent many hours working on the simulation software in ATLAS, I thought this would be a good place to explain what on earth that is (H/T to Al Brooks for the title). Our experiment wouldn’t run without the simulation, and yet there are few people who really understand it. So that I don’t have to grossly over-simplify, I’ll try to make this a three-part post. Our “simulation” runs in three steps, so it seemed only appropriate. If you want to read a lot more, albeit in a rather technical form, you can try the ATLAS simulation paper that we wrote a few years ago (ATLAS’s first cover article!).

Say your friend comes up with a new theory about how the world works. “This theory we have now (the Standard Model) is pretty good,” your friend says, “but it would work way better if we added some meons. See, if we just add meons, we can explain so much more! And if I’m right, then the LHC is making meons every minute!!” You, a member of ATLAS, think your friend is nuts – and you don’t particularly like the name “meons” – so you decide you are going to prove that he is wrong (or, if he’s lucky, help him win the Nobel Prize!).

Unfortunately, you can’t just demand that ATLAS find meons – we wouldn’t know what we were looking for!! So we need to know what reality would look like if there were meons, and if there weren’t meons. Then we can check which one of those our data looks like, and we can say (with some confidence) “Yes, there probably are meons,” or “No, there probably aren’t meons.” In steps simulation, your new hero!

The first step to any good simulation is called “event generation.”

The LHC collides protons. The theory that describes what happens when those protons collide is called Quantum Chromodynamics. It describes how the quarks and gluons inside the protons scatter off one another, how they might create new particles, and how those new particles behave after they’ve been created. Needless to say, it’s really complicated. In fact, for various reasons, there are some things that you just can’t calculate in the theory. That’s where event generators come in: they combine the parts we can calculate with well-motivated approximate models for the parts we can’t, and produce simulated proton-proton collisions. There are many event generators, and many are set up to do only one particular thing very, very well. Some of them are wonderfully generic, but often you have to string them together to get a full description of a single collision between two protons at the LHC. FeynRules + MadGraph5 + Pythia8 + EvtGen is one of my favorite combinations lately. FeynRules is particularly cool, because it lets a theorist write down the Lagrangian of their theory in Mathematica (which is a piece of software that almost all theorists use), and then it translates it into rules for Feynman Diagrams, and from there into code that an experimentalist can use.


The Sherpa authors make nice simplified pictures like these of their events. The real things look like that, but a few hundred times more complicated!

These event generators give you a list of all the particles that come out of a collision between two protons. If you read these blogs, you’ve probably seen notes about how particles interact with the detector and decay on their own sometimes, so what you really need is a list of particles that stick around for at least one hundredth of a billionth of a second or so. Those are “stable” enough that we should worry about their interacting with the detector. The event generators usually do rather complicated things to get the number of particles of each type right, but they are kind enough to leave us with a record of what they’ve done, like the one in the little picture on the left there. Inside of that record you can often find the original top quarks, or meons in our case, and see whether they decayed, what they decayed into, and what observable particles were produced from them. In fact, it’s a numerical model of a quantum mechanical process, so even we physicists have to remind ourselves not to cheat and look at the internal record from the event generator too often – we should be able to tell a Higgs boson from a top quark from a meon only by looking at the final particles that we can observe in our detector (pions, electrons, muons and so on).
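That lifetime cut is easy to sketch in Python. The (name, lifetime) tuples below are a stand-in for a real generator record format like HepMC, and the cut value is the “one hundredth of a billionth of a second” from the text.

```python
# "Stable enough to see": about one hundredth of a billionth of a second.
LIFETIME_CUT_S = 1e-11

def detector_stable(particles):
    """Filter a generator event record down to the particles the
    detector simulation should transport: those whose mean lifetime
    exceeds the cut, with None meaning absolutely stable (photons,
    electrons, ...). Tuples of (name, lifetime in seconds) stand in
    for a real event-record format."""
    return [p for p in particles
            if p[1] is None or p[1] > LIFETIME_CUT_S]
```

A charged pion (lifetime about 2.6 × 10⁻⁸ s) passes the cut and gets handed to the detector simulation; a Higgs boson (around 10⁻²² s) decays long before reaching any material, so only its decay products matter.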

So, to massively oversimplify things, all you do is tell your friend “write down how your meons interact in a program you like”, and after a little bit of work (shortest turn-around I’ve managed is several months), you can have a complete description of all the particles that come out of a collision between two protons when a meon is produced. Now you’re ready for the next step, detector simulation, which will be our next post.

If you are excited about it, you can try running event generation yourself. All the software is publicly available. I would recommend trying to download MadGraph5. Those guys are good at interfaces, and it has cute modules to let you try out various things. It’ll be a little bit of a struggle to understand everything that’s going on if you aren’t a particle physicist, but you can make a lot of pretty cool pictures if you are willing to spend a little time (and it really is doing event generation just like we do!).



Identity problems

An obligatory eye scan is required for all ATLAS underground personnel entering the experimental cavern. The iris recognition is performed by the IrisID iCAM7000.


Gate to the underworld

Its only point in life is to keep track of who enters and leaves the Zone. It sounds like a simple task for such an advanced technology, but — like most things in the world of research — it’s never without some hiccups.

The iCAM7000 comes complete with an interactive voice feedback system, personified by a sassy, but simplistic, guard-woman who I liken to the Cerberus of the ATLAS cavern. There exists one possible outcome for each of her heads: 1) she allows you to proceed into the underworld, opening the forward door; 2) she sends you back to where you came from, opening the backward door; or 3) she allows you to proceed, but the forward door remains closed and the backward door opens instead. The particular failure mechanism behind the latter, seemingly contradictory, case has yet to be understood and is best discussed in the appropriate forum. In the middle case, a robotic voice greets you:

 

“Soarry, we cannot confirm your identity.”

 

I end up hearing this way more often than you might expect. So often, in fact, that the sound of her drawled ‘soarry’ now produces an instantaneous Pavlovian response of frustration and rejection in me. Keep in mind that every emotion is amplified by a factor of 10 when you’re 100 meters below ground.

Sometimes the IR scanner positioned at the entrance to the capsule decides it doesn’t like something about you (e.g. your height, your weight, your mood) or the way you entered (e.g. too quick, too slow, with too much hip). One handy trick is to take your helmet off and start from scratch. The general consensus here is that it confuses the straps with a second person entering the capsule. But of course this is only conjecture, as there is never any useful debugging output, only:

 

“Soarry, we cannot confirm your identity.”

 

Moreover, it’s not entirely clear what she means by that. I find it amusing to think that the problem could simply be with my identity itself.

Identity searching


It’s true I’ve been doing a lot of soul-searching lately. Don’t get me wrong, I love testing cables for ATLAS — it’s humbling to be a small part of something grandiose. But lately, my knees have been taking a real beating at PP2 due to multiple bangs against various steel support structures and long hours of kneeling on the anti-skid aluminum planks. Maybe I’m getting too old for this and she’s finally on to me.

Come to think of it, I wonder if ATLAS stores an identity database of all their underground staff. My thoughts begin to wander off into an Orwellian nightmare starring our favorite iCAM7000 as ‘Big Sister’ …

Looking into the iris scanner, there’s an orange dot that turns green when it is properly aligned between the eyes and your head is at an appropriate distance from the scanner. A little back, a little forwards. Fortunately, some verbal guidance is given here, albeit rather spasmodically:

 

“Please move a little back from the camera.”

 

Steadfastly watching the dot while centering it on my forehead always feels a little like being a sniper on a rooftop waiting for that perfect shot. Except in this case, I’m both the target and the assassin. The narrative twist sends my thoughts spiralling out of control.


Looking into the abyss

In 300 years, what will they think when they stumble upon this abandoned relic of humanity? Will they conclude it’s some sort of unfinished spaceship, waiting patiently for its first test flight? Will they be accelerating particles we don’t even know exist yet? Will it all seem like a futile exercise or will it be praised as pioneering work that paved the way for current technologies and their understanding of the universe?

Luckily it’s not up to me to decide. For now, I’m just a cable tester:

 

“Thank you, you have been identified.”

 

I rejoice internally as I hear those magical words and see the tunnel passageway open in front of me. And then I suddenly realize that I forgot to pee.

 


Michael Leyton is a Visiting Assistant Professor at the University of Texas at Dallas. He has been a member of the ATLAS collaboration since 2004 and tested over 800 km of cables in the experimental cavern. His favorite cable harnesses are Type-2 VVDC and Type-4 HV.

Photos by Cécile Lapoire

Taking stock at the LHCP conference

I felt like I was returning home as I walked through the gates of Columbia University at 116th Street and Broadway, the day before the LHCP conference began. The scaffolding from the recently completed graduation ceremonies reminded me of my own PhD graduation thirteen years ago. The ubiquitous Columbia-blue signs of “Welcome back Alumni” seemed to be talking just to me. There was some nostalgia for what has changed, most notably the replacement of the tennis courts next to the large brick physics building with an even larger modern glass one.


Prof. Mike Tuts of Columbia University welcoming participants to LHCP 2014.

Jet-lag from the flight from England, my current home, had me awake at 4 am on the first morning of the conference. Anticipation — because it was going to be my first conference since the boson-discovery conference in Melbourne in 2012 — would not let me return to sleep. LHCP’s kick-off, given by the conference chair and my own former PhD supervisor, Professor Mike Tuts, reminded me of a few of the things to look forward to: a public showing of the new Particle Fever movie on Wednesday night and a panel discussion moderated by New York Times science writer Dennis Overbye on Friday afternoon.

The conference comes during a transition period for the LHC experiments. While the experimentalists are finalising many measurements from the first set of collisions, completed in 2013, they are making significant preparations for the next set of collisions scheduled to begin early next year. The scientific discussions of the sometimes mundane details of the first measurements are sprinkled with giddy soothsaying about what we might discover in the coming years, and how. Continuing into the coffee breaks, the tangible excitement of these conversations is a highlight of the conference.

My own presentation on measurements of multiple weak-boson production from the ATLAS experiment came on the second day. Jet-lag had not been kind, allowing me a mere three-and-a-half hours’ sleep the night before, and my only hope for a coherent presentation was to keep a steady stream of coffee pulsing through my veins. This worked only too well — I sped through the results at twice my planned speed, leaving the session chair to comment, “Well, we have plenty of time for questions…”

The morning sessions of the third day focused on what the newly-discovered Higgs boson could be telling us about what lies beyond. The afternoon was open, a break at the midpoint of the conference. For me this meant several hours of catching up on meetings and email. But there was a reward, an early dinner consisting of three things that are hard to find in the UK: fried chicken, a caesar salad, and a Brooklyn beer I’d never heard of. Then it was off to the showing of Particle Fever, where I had volunteered to answer any and all questions the public moviegoers had about physics.


Volunteers answering questions of the public before attending the Particle Fever screening at LHCP 2014.

I found a place next to a poster of the Standard Model and described the particles and interactions as best I could to the small crowd that formed around me. At the end, one of the listeners told me she was involved in the development of the poster — from the presentational side — and she now had a better understanding of what it represented. It’s always nice when someone lets you know you have done a good job.

More than 1000 attendees were packed into the big conference hall on the southwest corner of the Columbia campus to watch ‘Particle Fever’. The movie tells the story of the Higgs boson discovery, focusing on a few individuals who convey the excitement and activity within and outside the big experiments that took the data leading to the discovery. Afterwards the movie’s director and three of its stars — David Kaplan, Nima Arkani-Hamed, and Fabiola Gianotti — answered many questions from the audience. I was impressed by the depth of the public’s questions, cutting to many of the difficult issues that physicists continue to try to answer through their research. It is reassuring that the public finds many of the same questions interesting as we physicists do. This research is truly a universal human endeavour, and this is one of the core themes of the movie.

Another day of scientific results passed and the panel discussion came up after lunch. This focused on the major accelerators the field will need for the next big discoveries over the years and decades to come. While I will have retired before the next big accelerator produces data, I have a responsibility to ensure that the next generation of physicists have the tools to answer the questions that my generation has yet to even ask. The next discovery will lead to even more profound questions than the last — this is the excitement of research, and it will continue well beyond the discovery of the Higgs boson, the latest important milestone on the path to understanding the workings of the universe.


Chris Hays is a Research Lecturer at Oxford University focusing on Higgs boson measurements at ATLAS. He also works on the precise measurement of the W boson mass, which provided an expected mass range for the Higgs boson prior to its discovery. Chris is currently serving as ATLAS UK physics coordinator.

LHCPlanning for the future

As someone who comes from a small mountain town, for many years I’ve linked the word ‘summer’ to ‘seaside’ and ‘sun’. During my experience as a physicist working in ATLAS, I found myself associating the word ‘conferences’ with the word ‘summer’ more often than with the two above. Physicists work hard to meet review deadlines so that their result is made public before the start of the conference, often postponing seaside and sun. The reward is being able to present the work to an international audience: Summer conferences are the showcase of ATLAS results obtained throughout the year.

Even if in my case the results are still in the works, I was invited to chair the Physics Beyond the Standard Model sessions at the LHCP (Large Hadron Collider Physics) conference, at Columbia University in New York. LHCP is a new addition to the Summer conference calendar and is already a well-established fixture, even though it is only in its second edition. LHCP combines two of the previous Summer conferences, HCP and PLHC, allowing physicists to economise on acronyms and travel.

During our week at LHCP, the sea was a few kilometres away so it didn’t feature, but we had plenty of sun and plenty of results. Beyond the new ATLAS results on display (as detailed in Kate Shaw’s post), the conference featured an interesting debate on the outlook of LHC physics for the coming years. The discussion between the six panelists (Natalie Roe, Steve Ritz, Hitoshi Murayama, Jerry Blazey, Sergio Bertolucci, Nima Arkani-Hamed) was moderated by NYT journalist Dennis Overbye, and was centred on the perspectives for physics at the LHC and beyond in the next decades.

Overbye Panel

Panel chaired by Dennis Overbye discussing the report of the Particle Physics Project Prioritization Panel (P5).

The immediate, practical question that comes to mind is: why should we start thinking about the future so much in advance? We already have enough to do in ATLAS between completing 8 TeV searches and analyses and preparing for the upcoming 13 TeV run!

Past experience teaches us that planning for accelerators and experiments that require global collaborative efforts needs to start well in advance of the start of operations. The first steps towards the Large Hadron Collider were taken more than 20 years before LHC data taking began. Even though the concrete plan will certainly be driven by the results obtained at the 13 TeV LHC, now is the time to start thinking about a global strategy that calls many countries into action. Something that we scientists often take for granted is how well science fosters collaboration between countries that normally aren’t that fond of each other: everything seems so effortless when discussing physics problems! Policymakers aren’t as bright-eyed, though, and if we want worldwide collaboration we need a robust framework of international relations.

There are still many open questions in terms of targeted physics planning. As Fabiola Gianotti said, one of the most important is: at which energy scale will we find the answers to the shortcomings of the Standard Model?

John Ellis’s concluding theory talk at LHCP

Many of us still strongly suspect that the energy scale of the LHC gives us a very good starting point to look for answers (see the post by Zach Marshall about the supersymmetric particles that could be found at the LHC).

However, many of us also know that nature has not yet been so kind as to show us the signatures of new phenomena that easily, and it might not do so even in the upcoming LHC run. In the BSM sessions that I chaired, there was no claim of a new physics discovery yet. So we should plan our current and upcoming searches to welcome unexpected and rare processes, as they might help fill the gaps in our understanding of nature.

Theorist Markus Luty’s conclusions at the LHCP conference

Fabiola’s talk also highlighted that our recent successes might lead the way to new physics. The first LHC run pointed us to a particle that is different from any other we know: the Higgs boson. The Higgs boson could be the particle connecting the known Standard Model world to discoveries beyond our current understanding. We must study its properties in detail; the choice of whether to do so at a linear accelerator or a large circular one will heat the debates of the next few years.

Fabiola Gianotti’s slide at the LHCP conference: Enrico Fermi’s extrapolations on future technologies, 1954->1994

Another message from the discussion was that even though resources are limited, we don’t want to limit our ambitions. Enrico Fermi was quoted as an example: he mentioned TeV-scale colliders at a time when the technology was still science fiction.

If our vision of particle physics is that of a worldwide, coherent research field, collaboration will help us devise the new technologies needed to make future particle physics research facilities happen. Our efforts should also be targeted towards making those technologies as affordable as possible, since budgets are and always will be limited (let’s not forget that the LHC expenditure needs to be put into perspective). But, as Nima Arkani-Hamed also pointed out, we should keep in mind that science is an investment whose pay-off (for everyone, not just for us scientists) lies years from now. The questions we’re pursuing now shape our culture and our world, now and in the years to come, so let’s keep planning in order to answer them.


Caterina Doglioni Caterina Doglioni is a post-doctoral researcher in the ATLAS group of the University of Geneva. She got her taste for calorimeters with the Rome Sapienza group in the commissioning of the ECAL at the CMS experiment during her Master’s thesis. She continued her PhD work with the University of Oxford and moved to hadronic calorimeters: she worked on calibrating and measuring hadronic jets with the first ATLAS data. She is still using jets to search for new physics phenomena, while thinking about calorimeters at a new future hadron collider.

Notes from Underground: IBL vs Brazil Championship

More from our Notes from Underground blog series by ATLAS members preparing to explore new worlds that higher energy collisions will reveal in the LHC’s next run

Previously in Notes from Underground, Dave Robinson wrote in some detail about the work going on inside the ATLAS detector, and Clara Nellist wrote about the inner detector of ATLAS, discussing the different types of detection units, or sensors (planar and 3D). I will continue to delve into the exciting world of the inner detector with its brand new Insertable B-Layer (IBL) and its related parts.

Next year the LHC will start running again at 13 TeV, almost double the previous energy, and the protons will be collided every 25 nanoseconds, twice as often as in 2012. ATLAS therefore needed a new detector layer nearer to the collision point to help reconstruct the debris of each collision. The ATLAS detector is big (46 m long, 25 m in diameter), and at first it was difficult to believe there would be any extra space available for a new detector, but in fact a reduction of the diameter of the beam pipe itself was proposed to make room. The IBL was inserted into ATLAS just last month: an important and unique goal in this game. Next week, during the Football World Cup in Brazil, we will see how 11 players easily insert two or three balls into the goals in a few tens of minutes, but the insertion of the IBL was indeed a more difficult task.
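
A quick back-of-the-envelope check on those numbers (just a sketch; the function name below is mine, not from any ATLAS software):

```python
# Bunch-crossing rate implied by the bunch spacing: halving the spacing
# from 50 ns (2012) to 25 ns doubles how often the protons collide.

def crossing_rate_mhz(spacing_ns):
    """Crossing rate in MHz for a given bunch spacing in nanoseconds."""
    return 1e3 / spacing_ns  # 1000 MHz x ns = 1 crossing per ns

print(crossing_rate_mhz(50))  # 2012: 20 million crossings per second
print(crossing_rate_mhz(25))  # next run: 40 million crossings per second
```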

Ahmed inside the pit for the pixel services in Jan 2014

The IBL championship began several years ago, when the decision was made to insert a fourth pixel layer around a newly reduced beam pipe, both to improve the tracking system and to compensate for irreparable failures in the other layers. After a lot of work and technical support from many captains playing in the ATLAS club, the IBL has been installed between the existing pixel system and a new, smaller-radius beam pipe, at a radius of just 3.3 cm. To cope with the high radiation and pixel occupancy due to the proximity to the interaction point, a new read-out chip, a newer version of the planar pixel sensors, and a completely new design called 3D silicon were ‘invited’ from all over the world to help in this championship. Moreover, a lot of work has been done to improve the physics performance of the detector and make it more efficient.

The IBL, suited up in the clean room in SR1, ready to be lowered into the pit of the ATLAS cavern.

The IBL is made of 14 staves. A stave is simply the structure that holds the pixel modules, which are the main players in the detection process. There are two types of modules on each stave: 3D modules on the ends, which Clara discussed last week in her blog, and planar modules covering the central part of the stave.

Just as detailed health tests are needed to make sure only the fittest players make the starting team, detailed tests and selection rules are used to pick the best modules, electronics, services, cables and staves. I worked on this analysis to identify the highest-quality modules and staves to be used to build the IBL.

The IBL being lowered into the pit of the ATLAS cavern in May 2014.

The IBL was finally assembled, tested, and successfully installed inside the inner detector last month. Soon cosmic-ray testing will begin for detector commissioning and calibration.

To round off the comparison with the World Cup in Brazil: the football teams train in 30-degree heat, wearing their kit, on muddy fields outside, while the IBL team prefers to do its work inside chilly clean rooms wearing lab coats.


Ahmed Bassalat Ahmed Bassalat is a Palestinian PhD student at LAL, France, working on the IBL and planar pixel sensors for ATLAS and on the VBF H->invisible analysis channel. He joined Université Paris-Sud 11 in France after getting his Bachelor’s degree from An-Najah National University in Nablus, Palestine. Ahmed is working to get Palestinian students and universities more involved in high-energy physics.

Notes from Underground: Pixel Prototypes

More from our Notes from Underground blog series by ATLAS members preparing to explore new worlds that higher energy collisions will reveal in the LHC’s next run

In last week’s post for this Notes from Underground series, David talked about the work that goes on in the ATLAS pit. I’m going to take a step back and talk about what happens before a detector is installed. Although the work I want to tell you about didn’t technically take place underground, much of it was performed in what is essentially a large airport hangar without natural light, so it certainly feels like you’re 100 m down!

Setting up the equipment for an experiment. Photo credit J. Hasi.

My research is focused on the Pixel detector, which lies at the very heart of ATLAS, closest to the point where protons are smashed together (the Interaction Point).

The purpose of the pixel detector is to track charged particles as they travel outwards from the interaction point, allowing us to measure properties such as their electrical charge and momentum. One method is to see which way they bend in the magnetic field that surrounds this part of the detector; this helps us to identify particles. By following these tracks back towards the interaction point, we can work out when one of the particles was a beauty quark (or b-quark). We can tell because, once a b-quark has been created, it travels a few millimetres before turning into different particles. Our detector is accurate to tens of micrometres (0.01 mm), so we can see when this has happened. Finding out when a b-quark has been made is a very useful piece of information for many physics analyses.
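
To see why those few millimetres are visible at all, compare the b-quark’s typical flight distance with the detector precision. A rough sketch with illustrative numbers (the lifetime and boost below are assumed typical values, not ATLAS measurements):

```python
# How far does a b-hadron fly before decaying, and can a detector
# accurate to ~10 micrometres resolve it? Illustrative numbers only.

C_TAU_MM = 0.45    # c x (typical B-hadron lifetime), roughly 0.45 mm
GAMMA_BETA = 10.0  # assumed typical relativistic boost at LHC energies

flight_mm = C_TAU_MM * GAMMA_BETA  # mean flight distance: a few mm
resolution_mm = 0.01               # ~10 micrometres, as quoted above

# The flight distance is hundreds of times the resolution, so the
# displaced decay point stands out clearly.
print(flight_mm, flight_mm / resolution_mm)
```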

One problem is that every time the LHC collides bunches of protons together (40 million times a second as David said last week), it sprays the ATLAS detector with new particles and our pixel detector gets a bit damaged from all the radiation! Imagine you have a row of ducks at a fairground stall and someone’s throwing balls at them; if you hit one, the duck gets moved a bit, or even knocked over. This is what happens when particles (the balls) are travelling through our detector, which is made of a three-dimensional grid of silicon atoms (the ducks). When the atoms are displaced, electrons moving around inside can get trapped (and later released) and this means our measurements are not as good (or can’t happen at all). To be honest, the duck analogy was probably a bit strained here. The main idea is that the detectors get damaged by the radiation, and after a while we have to replace them.

For the current shut-down of the LHC there wasn’t enough time to completely replace the whole pixel detector, so we decided to add an extra layer and insert it between a new beam pipe and the current innermost layer. We called it the Insertable B-Layer. This layer had to be faster, last longer and take a sharper ‘image’ of the particles passing through it. Consequently, new pixel sensors had to be designed for this layer, since it was going to be even closer to the interaction point and the energy of the LHC protons was going to be increased.

When a prototype of a new design has been made, we take it to CERN (or another particle accelerator) and place it in the particle beam there. This is where the large airport hangar comes in. At CERN, the particle beams from some of the pre-accelerators for the LHC can be diverted (when they’re not busy feeding the LHC) to special experimental halls where we can do these sorts of experiments. Even when the accelerators at CERN are off, we can travel to another particle accelerator for these tests.

The crew for an experiment testing prototype pixel detectors at CERN. Photo credit J. Hasi.

During the experiment we have to make the most of the time available, so data is taken 24 hours a day! Fortunately the day is split into three eight-hour shifts, so it’s not so bad. Thankfully we always work in pairs, so there’s always someone to talk to at 4 AM when the data taking is stable (meaning there’s nothing for us to do until we have to change the way our experiment is set up). I’ve watched some terrible films at that point in a shift, because I don’t trust myself to be awake enough to make useful additions to my analysis code. From these experiments, two types of pixel detector were chosen to go into the IBL: a newer version of the design already in ATLAS, called planar pixel sensors, and a completely new design called 3D silicon.

The next stage was to make enough of these sensors (with some spares in case any got broken in the process) to build this new detector layer and install it. But I’ll leave that to Ahmed for his post next week!


Clara Nellist Clara Nellist is a British post-doc at LAL, France, working on planar pixel sensors for future upgrades of the ATLAS detector and the H->tautau analysis channel. She did her PhD with Manchester University studying 3D silicon pixel detectors for the IBL upgrade and her masters degree in top physics at the DO experiment at FermiLab. Clara is also active in science communication, with an aim to encourage more young women to study physics.

Notes from Underground: Servicing Silicon

Launching our Notes from Underground blog series by ATLAS members preparing to explore new worlds that higher energy collisions will reveal in the LHC’s next run

Engineers deep inside the ATLAS detector. Their location is a few metres directly below the usual point of collisions, and several metres above the cavern floor.

We physicists refer to the vast underground cavern that houses the ATLAS experiment as ‘the pit’. That may be a strange term to use for a marvel of civil, mechanical and electrical engineering, but nonetheless there are parallels to what you might imagine a ‘pit’ to be. Working inside the ATLAS detector in the pit can be dark, sometimes hot and not suited to those with claustrophobia. It often involves climbing several sets of makeshift steps and gantries and crawling flat on your stomach through narrow gaps to get to the part of the detector where you need to be. You will be wearing a safety helmet with mounted lamp, steel toe-cap shoes, one or more dosimeters to monitor radiation exposure and even a harness, if working at heights. Not to mention tools, laptop and any equipment you need to do your job. You tend to recognize the experimental physicists, engineers and technicians who have just come up from the pit – they stand blinking in the sunlight with a tired and rather sweaty appearance.

Getting authorization to work in the pit is no easy ride either. First you need a medical. Then there are safety courses to follow (with tests to pass). You must request access to the various ‘zones’ within the pit. You make a work request to detail the work, its duration, the location, and the number and names of people working with you. And then you fill out a risk assessment. All three of those formalities require approval by safety officers, site managers and project leaders. When that’s done, then finally you can put on your helmet, dosimeter, boots, and use your personal CERN badge (with a chip to identify you) to enter the different access zones, backed up by an iris scan to make sure it’s really you (the access control systems are electronically linked to the approval processes mentioned above). It sounds like a lot of hassle but after the initial shock you tend to take it in stride.

I’ve already mentioned that ‘the pit’ is an engineering marvel. The ATLAS detector is also a marvel of experimental physics. The sheer scale of the technology down there never fails to impress, even if you work there often. You can read the mind-boggling facts about ATLAS in this fact sheet. But the scale is only part of it – the really impressive part is appreciating what the numerous ‘sub-detectors’ that make up ATLAS are made of and how they function.

I am the Project Leader of one such sub-detector – the SemiConductor Tracker (SCT) – which is centred around the proton-proton collision point right in the heart of the experiment. The SCT is about 6 m long and 1.5 m in diameter. Its detecting ‘element’ is a ~6×6 cm silicon sensor with several hundred micro-strips implanted on its surface. A charged particle passing through the silicon generates electron-hole pairs in its bulk through ionization, and the holes drift towards the micro-strips where they form ‘blips’ of excess charge. We measure that charge and, because the micro-strips are microscopic (the clue is in the name), we can tell with very high precision exactly where the particle passed through the silicon. And here’s the thing: there are more than 16,000 such silicon sensors in the SCT, together comprising about 6 million micro-strips, and we measure the charge on every single one. Our 60 square metres of silicon allow us to measure the trajectories (or ‘tracks’) of the thousands of particles generated by each proton-proton collision, each with a precision of microns (millionths of a metre). And it does this small task 40 million times every second, which happens to be the rate at which protons collide head-on in the centre of ATLAS.
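
How do strips spaced tens of microns apart give a precision of microns? A standard rule of thumb: with binary readout, a hit uniformly distributed across one strip pitch has an r.m.s. position error of pitch/√12. A small sketch (the 80-micrometre pitch is an assumed typical value for SCT-like micro-strips, not a number from this post):

```python
import math

# Position resolution of a micro-strip detector with binary readout:
# the hit lands uniformly within one strip pitch, so the r.m.s. error
# is pitch / sqrt(12). The pitch below is an assumed typical value.

PITCH_UM = 80.0
sigma_um = PITCH_UM / math.sqrt(12)
print(round(sigma_um, 1))  # ~23 micrometres from 80-micrometre strips
```

Combining measurements from several layers (and the small stereo angle between strip planes) sharpens this further.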

The LHC beam operations are on pause for two years, which is why we can work directly on the ATLAS detector in the pit (radiation levels would prevent us from entering the cavern otherwise). Even though we have stopped taking data, there is still plenty to do to safeguard this remarkable detector and prepare it for more data-taking from 2015, which is why I am often in the pit.

The SCT itself remains inaccessible, concealed within other sub-detectors that surround the collision point, like the layers of an onion (as Shrek once said, it’s complex). But the tens of thousands of cables, optical fibres, and cooling circuits that service the SCT are partially exposed. And the SCT is just one of many such sub-detectors in ATLAS, each with its own services. A tiny mistake – something as small as a stray washer or misplaced screw – could provoke an electrical short, and the electronic noise arising from that short could prevent us from measuring the tiny amount of charge on the micro-strips. Elaborate detection systems are in place to spot such mistakes instantly. We also have to be vigilant about the environment around the silicon sensors. During collisions, the SCT is operated cold (-7°C) to minimize the rate of radiation damage to the silicon, so the sensors must be kept very dry (in a nitrogen atmosphere) to prevent condensation or frost, which could destroy the millions of delicate connections to the silicon.

We also have to prepare for data taking again from 2015. When I mentioned earlier that protons collide head-on 40 million times per second, I neglected to mention that it’s not one proton colliding with one other, but bunches of billions. When these bunches collide, sufficiently focused at the point of collision, chances are that there are many simultaneous collisions between the quarks and gluons of multiple protons. So the tracks we measure from what looks like one massive collision are in fact the superposition of multiple (massive) collisions. The LHC has become much better at this than we originally foresaw, so we need to be able to extract even more massive amounts of data, in some cases beyond the existing capabilities of the detector readout systems. This has required significant upgrades to those systems this year.
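
The number of simultaneous collisions in a crossing fluctuates around its average; to a good approximation it is Poisson-distributed. A sketch, where the mean of 20 is an illustrative value rather than an ATLAS figure:

```python
import math

# Sketch: the number of simultaneous proton-proton collisions per bunch
# crossing ("pile-up") fluctuates; to a good approximation it follows a
# Poisson distribution. The mean of 20 is illustrative only.

def pileup_prob(k, mu):
    """Poisson probability of exactly k simultaneous collisions, mean mu."""
    return math.exp(-mu) * mu**k / math.factorial(k)

MU = 20
print(pileup_prob(MU, MU))                              # peak, near the mean
print(sum(pileup_prob(k, MU) for k in range(31, 150)))  # tail above 30
```

Crossings well out in that tail, not just the average, are what the readout systems must be sized for.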

I’ve touched on just a few of the activities currently underway in the pit but, believe me, this is just the tip of the iceberg. There will be further notes from underground in the coming weeks describing more of the work going on right now. Working on ATLAS in the pit raises enormous challenges – technical, scientific and even physical. But the rewards are enormous too: meeting these challenges with truly international teams of skilled and motivated engineers and physicists. Right now, I wouldn’t want to work anywhere else.


Dave Robinson Dr Dave Robinson is a Senior Research Physicist at the Cavendish Laboratory, Cambridge University, and at CERN. Since March 2013 he has been the Project Leader of the ATLAS Semiconductor Tracker, and Project Leader of the ATLAS Inner Detector. Among other things, he has worked on triggering, data acquisition and silicon detector design and development for the UA1, OPAL and ATLAS experiments at CERN.