# "undo" the young_project_tensor...

Hi,

As is well known, the "young_project_tensor" command expands a tensor expression in a basis determined by the Young tableau symmetries of its tensors. This is powerful if one wants to compare different expressions, or to prove identities such as the Bianchi identity for the Riemann tensor.

But I often encounter the situation where I need to rewrite such basis expansions back in terms of the original tensors, or in some more compact form. In other words, I am asking whether there is a reverse command to undo "young_project_tensor".

An example to illustrate my question is the following:

ex:= R_{a b c d} R_{a c e f} \gamma_{b d e f} + \frac{1}{2} R_{a b c d} R_{a b e f} \gamma_{c d e f};
young_project_tensor(_, modulo_monoterm=True)
distribute(_)
canonicalise(_)
rename_dummies(_);


it gives me

\frac{8}{9} R_{a b c d} R_{a c e f} \gamma_{b d e f} + \frac{4}{9} R_{a b c d} R_{a e c f} \gamma_{b d e f} + \frac{4}{9} R_{a b c d} R_{a b e f} \gamma_{c d e f}


I know that this is nothing but

R_{a b c d} R_{a b e f} \gamma_{c d e f}


This is one of the simplest expressions in my computations, so I recognized it immediately.

But is there a way to directly convert the three-term expression to the compact one-term result?

Best,
Yi



Yes, you can do this with the new meld algorithm which Dom Price implemented; it combines the terms of an expression into a minimal number of terms. Full example:

{a,b,c,d,e,f,g,h}::Indices;
R_{a b c d}::RiemannTensor;
\gamma{#}::AntiSymmetric;
ex:= R_{a b c d} R_{a c e f} \gamma_{b d e f} + \frac{1}{2} R_{a b c d} R_{a b e f} \gamma_{c d e f};
young_project_tensor(_, modulo_monoterm=True)
distribute(_)
canonicalise(_)
rename_dummies(_);

meld(_);


This gives

2 R_{a b c d} R_{a c e f} \gamma_{b d e f}


which is not identical to the expression you gave, but equivalent. To see that, just meld the difference:

tst:= R_{a b c d} R_{a b e f} \gamma_{c d e f} - @(ex);
meld(_);


which gives 0 as expected.

You can of course also apply meld to the original expression directly, without running young_project_tensor first; the result is the same.
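Concretely, a minimal sketch of that shortcut (same property declarations and input expression as above, just skipping the projection steps):

{a,b,c,d,e,f,g,h}::Indices;
R_{a b c d}::RiemannTensor;
\gamma{#}::AntiSymmetric;
ex:= R_{a b c d} R_{a c e f} \gamma_{b d e f} + \frac{1}{2} R_{a b c d} R_{a b e f} \gamma_{c d e f};
meld(_);

This again collects the two terms into a single one, since meld detects term equivalence under the declared multi-term symmetries without needing an explicit Young projection.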

Proper documentation and a paper on this algorithm are coming up shortly.