There are as many algorithms as there are experimental setups times the detectors used in those setups. The algorithms are built to fit the detectors, not the other way around.
The common aspects are few:
1) Charged particles interact with matter by ionizing it, and one builds detectors in which the passage of an ionizing particle can be recorded: a bubble chamber, a Time Projection Chamber, a vertex detector (of which various types exist). These are used in conjunction with strong magnetic fields, and the bending of the tracks gives the momentum of the charged particle.
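The relation between the bending of a track and its momentum can be sketched with the standard rule of thumb p_T [GeV/c] ≈ 0.3 · B [T] · r [m] for a unit-charge particle; tracking algorithms effectively invert this after fitting a helix to the recorded hits. The field and radius values below are illustrative, not taken from a real event.

```python
# Minimal sketch: transverse momentum of a charged track from its
# bending radius in a solenoidal magnetic field, using the standard
# relation p_T [GeV/c] ~ 0.3 * |q| * B [T] * r [m]
# (q in units of the electron charge).

def pt_from_radius(radius_m: float, b_field_t: float, charge: float = 1.0) -> float:
    """Transverse momentum in GeV/c from bending radius and field."""
    return 0.3 * abs(charge) * b_field_t * radius_m

# Example: a track curving with r = 1.67 m in a 4 T field
# corresponds to roughly 2 GeV/c of transverse momentum.
print(pt_from_radius(1.67, 4.0))
```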
2) Neutral particles are either
a) photons, which the electromagnetic calorimeters measure,
b) hadronic, i.e. they interact with matter, and hadronic calorimeters are designed to measure the energy of these neutrals, or
c) weakly interacting, like neutrinos, which can only be detected by measuring all the energies and momenta in the event and finding the missing energy and momentum.
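The missing-momentum idea in c) can be sketched as follows: sum the momenta of all reconstructed (visible) particles, and attribute the negative of that sum, in the transverse plane, to undetected weakly interacting particles. The four-momenta below are toy numbers for illustration only.

```python
# Sketch of the missing-transverse-momentum calculation: momentum
# conservation in the plane transverse to the beam means the visible
# momenta should balance; any imbalance is assigned to undetected
# particles such as neutrinos.
import math

def missing_transverse_momentum(visible):
    """Given a list of (px, py, pz, E) four-momenta in GeV,
    return (MET magnitude, px_miss, py_miss)."""
    px_miss = -sum(p[0] for p in visible)
    py_miss = -sum(p[1] for p in visible)
    return math.hypot(px_miss, py_miss), px_miss, py_miss

# Toy event with two visible objects; their transverse momenta do not
# balance, so the event has nonzero missing transverse momentum.
event = [(30.0, 10.0, 5.0, 32.0), (-12.0, -25.0, 8.0, 29.0)]
met, pxm, pym = missing_transverse_momentum(event)
print(met)
```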
In addition there are the muon detectors: muons leave charged tracks that go through meters of matter without interacting except electromagnetically, and the outermost detectors are designed to catch them.
The complexity of the LHC detectors requires these enormous collaborations of 3000 people working toward one goal: getting physics data out of the system. Algorithms are a necessary part of this chain and are made to order using the basic physics concepts that drive the detectors.
As Curiousone says, a lot of elbow grease is needed to understand the algorithms that enter into the data reduction from these detectors. They are certainly custom made.
This post imported from StackExchange Physics at 2014-08-12 09:35 (UCT), posted by SE-user anna v