The difference is that content validity is evaluated carefully and systematically, whereas face validity is a more general measure, and the subjects themselves often have input.
For example, after a group of students sat a test, you could ask them for feedback, specifically whether they thought it was a good test. This enables refinements for the next research project and adds another dimension to establishing validity.
Face validity is classed as 'weak evidence' supporting construct validity, but that does not mean that it is incorrect, only that caution is necessary.
For example, imagine a research paper about global warming. A layperson might read it and think that it describes a solid experiment, clearly highlighting the processes behind global warming.
A distinguished climatology professor, on the other hand, might read the same paper and find both it and the reasoning behind its techniques to be very poor.
This example shows the importance of expert face validity, applied through peer review, as a useful filter for eliminating shoddy research from science.
If Face Validity Is So Weak, Why Is It Used?
Especially in the social and educational sciences, it is very difficult to measure the content validity of a research program.
Often, there are so many interlinked factors that it is practically impossible to account for them all. Many researchers therefore send their plans to a group of leading experts in the field, asking whether they think the program is sound and representative.
This expert assessment gives the design enough face validity to withstand scrutiny, and it helps a researcher find potential flaws before wasting a lot of time and money.
In the social sciences, it is very difficult to apply the scientific method, so experience and judgment are valued assets.
Before any physical scientists conclude that this has nothing to do with their more quantifiable approach, it is worth noting that face validity is something almost every scientist uses.