I think it's best to stick with Popper's definition of a good scientific theory: it needs to make falsifiable predictions, and those predictions must then survive, rather than be falsified by, future experimental data.
General Relativity, for example, passed this with flying colours (the perihelion of Mercury, atomic clocks on airplanes, etc.). From what I've heard, string theory doesn't.
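For concreteness, the standard textbook figure (quoting from memory): GR predicts a perihelion advance per orbit of

$$\Delta\phi \approx \frac{6\pi G M_\odot}{c^{2}\, a\, (1 - e^{2})},$$

which for Mercury works out to roughly 43 arcseconds per century, matching the anomaly Newtonian gravity couldn't account for.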
An interesting example of where it gets tricky is the neutrino. IIRC it was postulated to explain an apparent energy deficit in a nuclear reaction (beta decay), but then people were able to use it to predict other experimental results. Is dark matter like that? Cosmology is really tough from this point of view.
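Roughly, the puzzle (as I remember it) was that beta decay looked like a two-body decay, which should give the electron a single fixed energy, yet a continuous spectrum was observed. Pauli's neutrino balances the books:

$$n \to p + e^{-} + \bar{\nu}_{e}, \qquad E_{e} + E_{\bar\nu} \approx Q,$$

so the electron can carry anything up to the fixed decay energy $Q$, with the neutrino taking the rest.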
You seem to be suggesting that a model needs to explain at least two different phenomena.
You cannot falsify DE, since DE is not a model by itself but an integral part of one. You can falsify the whole model, though.
The most likely scenario is not that DE is disproven but that our understanding of it improves, just like the neutrino started out as nothing more than an unexplained momentum deficit.
I don't think it's the number (2, or > 1) that matters; it's that there are things not yet tested that the model successfully predicts (and that other models/theories fail to predict).