Like any 'process', this is something that needs to be defined, documented, and followed by your organization, based on what is most effective and practical for your circumstances.
There are two levels of test case review. 'Level 1' is a high-level specification: a sentence or paragraph which specifies WHAT each test case will test. It is critical that these are reviewed by the people you are running the tests for, so they can confirm that what you will be testing is valid and provides 'complete' coverage. It can also be used to minimize duplication between the various levels of testing. This review can be part of the test plan review or a separate review, but it must be formal, documented, and signed off by all parties when the review is successfully completed. In the 'old days' this was done via paper copies and 'red pencil'; nowadays an online 'eReview' tool is much more effective, since everybody can see everybody else's comments and responses 'up front'.
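As a concrete illustration (the format, test IDs, and wording below are invented for this example, not part of any standard), Level 1 specifications are just one-line 'what' statements, sketched here as Python data so they could feed a review checklist:

    # Level 1: WHAT each test case will test, one line per case.
    # Reviewers sign off that this list is valid and coverage is 'complete'.
    LEVEL_1_SPECS = {
        "TC-001": "Login succeeds with a valid username and password.",
        "TC-002": "Login is locked out after three consecutive bad passwords.",
        "TC-003": "Password reset email is sent to the registered address.",
    }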
'Level 2' is HOW each test will be done. This is more of a gray area, and the review does not have to be as formal as the Level 1 review. Your choices include (a short sketch contrasting the two levels follows the list):
> No review - Not recommended, but not necessarily a disaster if the tester is both good and versatile.
> Self review - Better, but still not ideal. If you had a 'blind spot' when you wrote it, the odds are good you'll still have that blind spot when you review it.
> Peer review - This is often the most practical choice. Having other testers look it over helps spot holes in the translation from the Level 1 design to Level 2, as well as invalid steps.
> 'Customer' review - That is, review by the people you are doing the testing for. This can provide good feedback, but by itself it may not be the best choice unless those people are experienced testers. Doing this review in conjunction with 'Peer review' is the most effective option, but it is often not practical due to the cost in personnel and time.
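To contrast the two levels, here is a minimal sketch of a Level 2 test case, the HOW, for TC-002 above. pytest conventions are assumed purely for illustration, and FakeAuth is an invented stand-in for the system under test:

    # Level 2: HOW the test is done - concrete steps and expected results.
    class FakeAuth:
        """Invented stand-in for the system under test."""
        def __init__(self):
            self.failures = 0
        def login(self, user, password):
            if self.failures >= 3:
                return False              # account is locked out
            if password != "s3cret":
                self.failures += 1
                return False
            return True

    def test_tc_002_lockout_after_three_bad_passwords():
        """TC-002: Login is locked out after three consecutive bad passwords."""
        app = FakeAuth()
        for _ in range(3):
            assert not app.login("alice", "wrong")   # steps 1-3: bad password rejected
        assert not app.login("alice", "s3cret")      # expected: locked out, even the correct password fails

Note how the docstring carries the Level 1 'what' while the body spells out the Level 2 'how'; a peer or 'customer' reviewer can check each against the other.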