In benchmarking, the term "leading indicator" denotes a criterion or indicator that has a causal effect on other indicators, an effect which will show up the next time the benchmarking process is run. For example, an indicator of "IT expenditure per student FTE" would be expected to have a positive impact on other indicators such as "reliability" and "performance"; whether there would also be a causal effect on softer indicators such as "student satisfaction" is a much-debated topic.
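The intuition above can be sketched as a lagged correlation: a leading indicator's score in one benchmarking run should correlate with a lagging indicator's score in the next run. The sketch below is purely illustrative; the indicator names and score series are hypothetical, not drawn from any real benchmarking exercise.

```python
def lagged_correlation(leading, lagging, lag=1):
    """Pearson correlation between leading[t] and lagging[t + lag]."""
    x = leading[:len(leading) - lag]   # leading scores, runs 1..n-lag
    y = lagging[lag:]                  # lagging scores, shifted by `lag` runs
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

# Hypothetical scores across five annual benchmarking runs:
it_spend_per_fte = [1.0, 1.4, 1.5, 1.9, 2.2]   # candidate leading indicator
reliability      = [3.0, 3.1, 3.6, 3.8, 4.3]   # candidate lagging indicator

print(round(lagged_correlation(it_spend_per_fte, reliability), 2))
```

A high lagged correlation is of course only suggestive, not proof of causation, which is precisely why the causal status of softer indicators remains debated.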
For broader indicators, such as many (but not all) of the criteria in Pick&Mix or eMM, it is much less clear that the terms "leading" and "lagging" make sense. Causal relationships do exist, but they operate at the process level and are, one hopes, picked up by scores on criteria, not usually directly by scores on narrower indicators (though in some cases they will be). Thus an increased score on "e-learning strategy" is likely to lead to higher scores on the broad criteria "pedagogy" and "training", as well as on the narrower criteria "adoption" and "reliability". This observation is consistent with general critiques of "management by numbers".