This task is more complex than, say, solving a quadratic equation; one must master a significant portion of a textbook – such as Georgi's textbook – and perhaps something beyond it to have everything one needs.

For the 8-dimensional representation of $SU(3)$, things simplify because it's the "adjoint rep" of $SU(3)$ – the vector space that formally coincides with the Lie algebra itself. And the action of the generator $G_i$ on the basis vector $V_j=G_j$ of the adjoint representation is given by
$$ G_i (V_j) = [G_i,V_j]= \sum_k f_{ij}{}^k G_k $$
This implies that the structure constants $f$ directly encode the matrix elements of the generator $G_i$ with respect to the adjoint representation – $j$ and $k$ label the row and the column, respectively.
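This claim can be checked numerically. The following sketch (in Python/NumPy, assuming the standard physicists' convention $[T_a,T_b]=if_{abc}T_c$ with $T_a=\lambda_a/2$ built from the Gell-Mann matrices) extracts the structure constants from traces of commutators and confirms that the matrices $(F_a)_{bc}=-if_{abc}$ obey the same algebra – i.e. that the $f$'s really are the adjoint matrix elements, up to the conventional factor of $-i$:

```python
import numpy as np

# The eight Gell-Mann matrices lambda_a; the generators are T_a = lambda_a/2,
# normalized so that Tr(T_a T_b) = delta_ab / 2.
lam = np.zeros((8, 3, 3), dtype=complex)
lam[0][0, 1] = lam[0][1, 0] = 1
lam[1][0, 1] = -1j; lam[1][1, 0] = 1j
lam[2][0, 0] = 1;   lam[2][1, 1] = -1
lam[3][0, 2] = lam[3][2, 0] = 1
lam[4][0, 2] = -1j; lam[4][2, 0] = 1j
lam[5][1, 2] = lam[5][2, 1] = 1
lam[6][1, 2] = -1j; lam[6][2, 1] = 1j
lam[7] = np.diag([1, 1, -2]) / np.sqrt(3)
T = lam / 2

# Structure constants from f_abc = -2i Tr([T_a, T_b] T_c)
comm = np.einsum('aij,bjk->abik', T, T) - np.einsum('bij,ajk->abik', T, T)
f = (-2j * np.einsum('abik,cki->abc', comm, T)).real

# Adjoint representation: (F_a)_{bc} = -i f_{abc}, so the structure
# constants literally fill in the matrix entries
F = -1j * f

# The adjoint matrices satisfy the same algebra, [F_a, F_b] = i f_abc F_c
lhs = np.einsum('aij,bjk->abik', F, F) - np.einsum('bij,ajk->abik', F, F)
rhs = 1j * np.einsum('abc,cik->abik', f, F)
assert np.allclose(lhs, rhs)
print(round(f[0, 1, 2], 6))  # the famous f_123 = 1
```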

The structure constants $f$ determining the commutators may be extracted from all the roots. The whole mathematical structure is beautiful but the decomposition of the generators under the Cartan subalgebra has several pieces, and therefore an even greater number of different types of "pairs of pieces" that appear as the commutators.

Exactly $r$ of the generators $G_i$ – $r$ being the rank – are identified with the Cartan generators $h_a$. Each of the remaining generators $G_j$ is uniquely associated with one of the roots.

If you only have the Cartan matrix, you effectively have just the inner products of the simple roots. You first need to generate all the roots from the simple ones, and those are in one-to-one correspondence with the $d-r$ (dimension minus rank) non-Cartan generators, each labeled by its root vector $r_j$.
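The step from the Cartan matrix to the full root system can be sketched with the standard root-string algorithm: store each root as its integer coefficient vector in the simple-root basis, and add $\beta+\alpha_i$ whenever $q = p - \langle\beta,\alpha_i^\vee\rangle > 0$, where $p$ is the length of the $\alpha_i$-string below $\beta$. (The convention $C_{ij}=2(\alpha_i,\alpha_j)/(\alpha_j,\alpha_j)$ is assumed; some books use the transpose.) A minimal Python sketch, run on the $A_2$ Cartan matrix of $SU(3)$:

```python
def positive_roots(C):
    """All positive roots, as coefficient tuples in the simple-root basis,
    of the simple Lie algebra with Cartan matrix C, built level by level."""
    r = len(C)
    simple = [tuple(int(i == j) for j in range(r)) for i in range(r)]
    roots, level = set(simple), list(simple)
    while level:
        nxt = []
        for beta in level:
            for i in range(r):
                # p = length of the alpha_i-string below beta
                p, cur = 0, list(beta)
                cur[i] -= 1
                while tuple(cur) in roots:
                    p += 1
                    cur[i] -= 1
                # <beta, alpha_i^vee> = sum_j beta_j C_ji
                m = sum(beta[j] * C[j][i] for j in range(r))
                if p - m > 0:  # q > 0: beta + alpha_i is also a root
                    new = list(beta)
                    new[i] += 1
                    if tuple(new) not in roots:
                        roots.add(tuple(new))
                        nxt.append(tuple(new))
        level = nxt
    return sorted(roots)

A2 = [[2, -1], [-1, 2]]
print(positive_roots(A2))  # [(0, 1), (1, 0), (1, 1)]
```

Together with their negatives, the three positive roots give the $6 = d - r = 8 - 2$ roots of $SU(3)$; the same function reproduces, e.g., the six positive roots of $G_2$.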

The commutators of two Cartan generators vanish,
$$[h_i,h_j]=0$$
The commutator of a Cartan generator with a non-Cartan generator is given by
$$[h_i,G_{r(j)}] = r_i G_{r(j)}$$
because we organized the non-Cartan generators as simultaneous eigenstates under all the Cartan generators. Finally, the commutator
$$[G_{r(i)},G_{r(j)}]$$
is zero if $r_i=r_j$. It is a natural linear combination of the $h_i$ generators if the root vectors obey $r_i=-r_j$. If $r_i+r_j$ is a vector that isn't a root vector, the commutator has to vanish. And if $r_i+r_j$ is a root vector but $r_i\neq \pm r_j$, then the commutator is proportional to $G_{r(i)+r(j)}$ corresponding to this "sum" root vector. The coefficient (mostly sign) in front of this commutator is subtle.
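All three cases can be verified explicitly for $SU(3)$. A sketch (Python/NumPy, with the Gell-Mann conventions; the Cartan generators are $h_1=T_3$, $h_2=T_8$, and the ladder operators are the familiar combinations $T_1\pm iT_2$ etc., labeled by their root vectors):

```python
import numpy as np

# Gell-Mann generators T_a = lambda_a / 2
lam = np.zeros((8, 3, 3), dtype=complex)
lam[0][0, 1] = lam[0][1, 0] = 1
lam[1][0, 1] = -1j; lam[1][1, 0] = 1j
lam[2][0, 0] = 1;   lam[2][1, 1] = -1
lam[3][0, 2] = lam[3][2, 0] = 1
lam[4][0, 2] = -1j; lam[4][2, 0] = 1j
lam[5][1, 2] = lam[5][2, 1] = 1
lam[6][1, 2] = -1j; lam[6][2, 1] = 1j
lam[7] = np.diag([1, 1, -2]) / np.sqrt(3)
T = lam / 2

h = [T[2], T[7]]                               # Cartan generators h_1, h_2
E = {(1.0, 0.0):               T[0] + 1j*T[1],  # ladder operators E_alpha,
     (0.5,  np.sqrt(3)/2):     T[3] + 1j*T[4],  # labeled by their root
     (-0.5, np.sqrt(3)/2):     T[5] + 1j*T[6]}  # vectors (h_1, h_2 e.values)
E.update({(-a, -b): M.conj().T for (a, b), M in E.items()})  # lowering ops

def comm(A, B):
    return A @ B - B @ A

# [h_i, E_alpha] = alpha_i E_alpha for every root alpha
for alpha, M in E.items():
    for i in (0, 1):
        assert np.allclose(comm(h[i], M), alpha[i] * M)

# alpha + beta = 0: the commutator lands in the Cartan subalgebra (diagonal)
X = comm(E[(1.0, 0.0)], E[(-1.0, 0.0)])
assert np.allclose(X, np.diag(np.diag(X)))

# alpha + beta not a root: the commutator vanishes
assert np.allclose(comm(E[(1.0, 0.0)], E[(0.5, np.sqrt(3)/2)]), 0)

# alpha + beta a root: the commutator is proportional to E_{alpha+beta}
Y = comm(E[(1.0, 0.0)], E[(-0.5, np.sqrt(3)/2)])
assert np.allclose(Y, E[(0.5, np.sqrt(3)/2)])
```

The last assertion also illustrates the subtle normalization: here the proportionality coefficient happens to be $+1$, but its sign depends on the chosen phase conventions for the ladder operators.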

Once you have all these commutators, you have effectively restored all the structure constants $f$, and therefore all the matrix entries with respect to the adjoint representation.

To find matrix elements for a general representation is much more complex. You must first figure out what the representations are. Typically, you want to start with the fundamental (and/or antifundamental) rep; all others may be obtained as summands in the direct-sum decomposition of tensor products of many copies of the fundamental (and/or antifundamental, if it is different) representation.
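As a tiny example of this tensor-product strategy, the weights of $3\otimes\bar 3$ of $SU(3)$ may be added pairwise and compared with the content of $8\oplus 1$. A sketch, where the weights are taken to be the diagonal $(T_3, T_8)$ eigenvalues of the fundamental basis states:

```python
import numpy as np
from collections import Counter

s3 = np.sqrt(3)
# weights (T3, T8 eigenvalue pairs) of the fundamental 3 of SU(3)
w3 = [(1/2, 1/(2*s3)), (-1/2, 1/(2*s3)), (0.0, -1/s3)]
wbar = [(-a, -b) for (a, b) in w3]      # antifundamental: negated weights

# weights of 3 x 3bar are all pairwise sums (rounded to tame float noise)
prod = Counter((round(a + c, 9), round(b + d, 9))
               for (a, b) in w3 for (c, d) in wbar)

# 9 weights total: six distinct nonzero ones (the roots of SU(3)) and a
# triply-degenerate zero -- exactly the content of the adjoint 8 (six
# roots plus two zero weights for the Cartan generators) plus the singlet
assert prod[(0.0, 0.0)] == 3
nonzero = {w for w in prod if w != (0.0, 0.0)}
assert len(nonzero) == 6 and all(prod[w] == 1 for w in nonzero)
```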

All the representations may be obtained from the weight lattice, which contains the root lattice as a sublattice of finite index and looks similar to it. In fact, the weight lattice is the dual (in the "form" vector space sense) of the coroot lattice under the natural inner product.
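Concretely, for $SU(3)$ the fundamental weights $\omega_i$ are fixed by the duality condition $2(\omega_i,\alpha_j)/(\alpha_j,\alpha_j)=\delta_{ij}$, and the simple roots expand over them with integer coefficients given by the rows of the Cartan matrix. A small NumPy sketch (the simple roots are placed in an arbitrary orthonormal 2d basis; only ratios of inner products matter):

```python
import numpy as np

# Simple roots of A2 = su(3), unit length, at 120 degrees
alpha = np.array([[1.0, 0.0],
                  [-0.5, np.sqrt(3)/2]])

# Fundamental weights w_i: 2 (w_i, alpha_j) / (alpha_j, alpha_j) = delta_ij
B = 2 * alpha / np.sum(alpha**2, axis=1, keepdims=True)  # rows = coroots
W = np.linalg.inv(B.T)                                   # rows = weights w_i
assert np.allclose(W @ B.T, np.eye(2))                   # duality check

# Each simple root is an integer combination of the weights; the
# coefficient matrix is exactly the Cartan matrix, so the root lattice
# sits inside the weight lattice with index det(C) = 3
C = alpha @ np.linalg.inv(W)
assert np.allclose(C, [[2, -1], [-1, 2]])
print(round(np.linalg.det(C)))  # index of the root lattice: 3
```

The index 3 is the order of the center $\mathbb{Z}_3$ of $SU(3)$, which is why the weights of the representations fall into three classes ("trialities").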

In practice, physicists never do the procedures in this order because that's not how Nature asks us to solve problems. We learn how to deal with the groups we need – which, at some moment, includes all the compact simple Lie groups as the "core" (special unitary, orthogonal, symplectic, and the five exceptional ones) – and we learn the reps of these Lie groups: the obvious fundamental ones, the adjoint, and the pattern for obtaining the more complicated ones.

I am afraid that it doesn't make sense to fill in all the "gaps" you might want elaborated, because in doing so, one would gradually be forced to write another textbook on Lie groups and representation theory as this answer, and I don't think that such a work would be appropriately rewarded – even this work wasn't. ;-)

This post imported from StackExchange Physics at 2015-11-01 21:04 (UTC), posted by SE-user Luboš Motl