There are some risks that must never be concealed, such as physical risk, severe emotional distress, and discomfort. For example, a subject volunteering for a sleep deprivation experiment must be informed of these risks, rather than ordered to participate, as has happened within the military.
Any deception should be revealed as soon as possible, and certainly no later than the conclusion of the experiment.
These guidelines stood unchanged for many years, but have recently been adapted. A subject signs an initial consent form but, after the experiment has been explained to them at the end, signs a second form and can ask for their contribution and records to be destroyed. Part of this change is due to various Data Protection acts, but the rise of reality TV has also contributed to a strengthening of ethical controls and consent.
The 'Big Brother' TV series, and many other alleged 'psychological experiments', provided an initial informed consent form, but it could be argued that the participants did not realize that the program would be subject to selective editing, potentially portraying them in the worst possible light.
Under the adjusted guidelines, a contestant could ask for their contribution to be removed, which would jeopardize the whole filming process. Of course, there are many other reasons why these programs cannot be classed as psychological experiments, but ethical concerns and consent are issues that may reach the courtroom soon.
The modern belief is that, except in extreme circumstances, a subject should be informed about the dangers before the experiment. At the very least, they should be given an informed consent form at the end and have the opportunity to have any data about them destroyed.
Judged by these standards, Milgram's experiment is a grey area, because the subjects were debriefed immediately after the study. If a researcher designed a similar experiment today, it would be unlikely to pass an ethical review.
At the time of the experiment, when the world was trying to understand whether 'just obeying orders' was a viable excuse for the worst excesses of the Nazi regime, it could be argued that the usefulness of the results outweighed the distress caused.
The moral of this is that any experiment has to be judged on a case-by-case basis. The Stanford Prison Experiment did obtain as much consent as possible; the failing in that case was the researcher's failure to halt the experiment when distress was observed.
For example, Zimbardo, the head researcher in the experiment, should have remained outside the study and pulled the plug, rather than allowing himself to be drawn into the psychological morass.
The Tuskegee Study had no such justification: the researchers not only failed to inform the subjects, but also broke their medical commitment to the Hippocratic Oath by failing to preserve life. This is a much more serious issue than consent, and is no different from the notorious Nazi experiments.
No ethical code or informed consent policy can be perfect, and there will always be situations where an experimenter causes too much distress.
On the other hand, there may be times when an experiment loses some validity because of informed consent. This is a problem with no correct answer, although modern techniques err on the side of caution.
Very few peer reviewers will be overly strict in questioning the validity of an experiment on account of such ethical caution. Instead, reviewers understand that any error arising from it has to be incorporated into the interpretation of the results. This is something science has to work around and adapt to.