Speaking as a programmer, what is being asked here is virtually impossible.
The main languages used to build websites and serve their content, PHP among them, all ship with the tools needed to scrape. Scraping is essentially the same thing Google, Bing, Yahoo, Facebook's Open Graph crawler, and the rest do when they fetch a page to build a rich snippet. Apart from a few minor differences, there is really no way to stop it.
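To make that concrete, here is a minimal sketch (in Python, assuming the requests and beautifulsoup4 libraries; the URL is just a placeholder) of the same fetch-and-parse pass a rich-snippet crawler performs. A scraper does nothing fundamentally different:

```python
import requests
from bs4 import BeautifulSoup

def fetch_snippet(url):
    # Fetch the page exactly as any crawler or scraper would.
    html = requests.get(url, timeout=10).text
    soup = BeautifulSoup(html, "html.parser")

    def og(prop):
        # Open Graph data sits in ordinary <meta property="og:*"> tags,
        # which is what Facebook reads to build its preview card.
        tag = soup.find("meta", attrs={"property": f"og:{prop}"})
        return tag["content"] if tag and tag.has_attr("content") else None

    return {
        "title": og("title") or (soup.title.string if soup.title else None),
        "description": og("description"),
        "image": og("image"),
    }

print(fetch_snippet("https://example.com/"))
```

Anything a browser can render, a script like this can read, which is why blocking it outright is not realistic.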
However, it is not entirely hopeless. Google's well-known Panda and Penguin updates were designed and rolled out to stop content farmers who scrape and republish portions of pages across multiple sites, whether briefly or long-term, eating up positions in the SERPs with black-hat links and usually with ads attached. But humans are still smarter than algorithms.
One solution does exist, though it is not widely accepted or recommended because it would literally drop thousands upon thousands of pages from the index, now that the traditional meta-tag system has been deprecated: publish the content as PDF, which blocks straightforward "reading" of the content itself. It also blocks several other enhancements along with it.
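If you did go the PDF route, a minimal sketch of rendering an article body as a PDF instead of HTML might look like the following. This assumes the reportlab library and a hypothetical article_to_pdf helper; it is an illustration of the trade-off, not a recommendation:

```python
from reportlab.lib.pagesizes import letter
from reportlab.pdfgen import canvas

def article_to_pdf(title, paragraphs, out_path="article.pdf"):
    # Draw the article text into a PDF so the body is no longer plain
    # HTML markup sitting in the page for anyone to lift.
    c = canvas.Canvas(out_path, pagesize=letter)
    width, height = letter
    y = height - 72
    c.setFont("Helvetica-Bold", 14)
    c.drawString(72, y, title)
    y -= 24
    c.setFont("Helvetica", 11)
    for para in paragraphs:
        for line in para.split("\n"):
            c.drawString(72, y, line)
            y -= 14
            if y < 72:  # start a new page when the current one is full
                c.showPage()
                c.setFont("Helvetica", 11)
                y = height - 72
        y -= 10  # gap between paragraphs

    c.save()

article_to_pdf("My Article", ["First paragraph of content.", "Second paragraph."])
```

The cost is exactly what was said above: the content falls out of the normal HTML crawl, so you lose snippets, rich results, and the other on-page enhancements along with the scrapers.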
If this fellow is a pro scraper, he will undoubtedly rotate domains every 60-90 days to stay ahead of the spider crawl, and he is possibly using a Twitter-like API to pull the content.
In the interim, it is best to authenticate your content in-page (canonical and authorship markup, for example) and report black-hat and scraping domains to the search engines.
James.