Putting out a script runnable by everybody might put too much strain on Springer's servers (and their generosity). I'll give some steps that the motivated can follow instead.
1. Extract a list of all a.btn links from the OP page (382 total).
2. Issue a HEAD request and follow redirects for each link to get the actual page URL (I used curl for this).
3. The page URL contains the book ID. Compare a sample page URL's format with its PDF and ePub links; each page URL can then be easily munged into both (I used sed).
4. The download pages are captcha-protected, but there's a <noscript> escape hatch in the source. Find it and pass the appropriate data in a POST request to the final file URLs (wget).
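Steps 2 and 3 can be sketched roughly as below. The URL shapes are assumptions on my part, not taken from the page itself — compare them against a real sample from step 3 before trusting the sed expressions:

```shell
#!/bin/sh
# Step 2: resolve a redirecting a.btn link to its final page URL.
# -s silent, -I HEAD request only, -L follow redirects,
# -o /dev/null discard headers, -w print the URL we ended up at.
resolve() {
  curl -sIL -o /dev/null -w '%{url_effective}' "$1"
}

# Step 3: munge a page URL into its PDF/ePub counterparts.
# The path patterns below are hypothetical -- derive the real ones
# by comparing a sample page URL with its actual download links.
to_pdf() {
  printf '%s\n' "$1" | sed 's|/book/|/content/pdf/|; s|$|.pdf|'
}
to_epub() {
  printf '%s\n' "$1" | sed 's|/book/|/download/epub/|; s|$|.epub|'
}

to_pdf "https://link.springer.com/book/10.1007/978-0-387-21736-9"
# → https://link.springer.com/content/pdf/10.1007/978-0-387-21736-9.pdf
```

For step 4, the POST itself is just `wget --post-data '...' <url>` once you've pulled the form fields out of the `<noscript>` block; those field names vary, so read them from the page source rather than hard-coding.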