Only allow Google and Bing bots to crawl a site

The last record (the one starting with User-agent: *) will be followed by all polite bots that don’t identify themselves as "googlebot", "google", "bingbot", or "bing".
And yes, it means that they are not allowed to crawl anything.

You might want to omit the * in /bedven/bedrijf/*.
In the original robots.txt specification, * has no special meaning; it’s just a character like any other. So that rule would only disallow crawling of pages whose URLs literally contain the character *.
Google doesn’t follow the robots.txt specification in that regard and uses * as a wildcard for "any sequence of characters". But even for Google the * isn’t needed in this case: /bedven/bedrijf/* and /bedven/bedrijf/ mean exactly the same thing, namely: block all URLs whose path begins with /bedven/bedrijf/.
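
If you want to see the original-spec behavior in practice, Python’s standard-library urllib.robotparser does plain prefix matching and treats * in a path as a literal character, so it works as a minimal sketch (the path and bot name are just the ones from this example):

import urllib.robotparser

# A record that keeps the trailing "*", fed to a parser that follows
# the original specification (no wildcard support).
rp = urllib.robotparser.RobotFileParser()
rp.parse("""\
User-agent: googlebot
Disallow: /bedven/bedrijf/*
""".splitlines())

# Prefix matching with a literal "*": ordinary URLs under the directory
# are not blocked; only URLs that actually contain "*" match the rule.
print(rp.can_fetch("googlebot", "/bedven/bedrijf/page"))  # True (allowed)
print(rp.can_fetch("googlebot", "/bedven/bedrijf/*"))     # False (blocked)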

And finally, you could reduce your robots.txt to two records, because a record can have multiple User-agent lines:

User-agent: googlebot
User-agent: google
User-agent: bingbot
User-agent: bing
Disallow: /bedven/bedrijf/
Crawl-delay: 10

User-agent: *
Disallow: /
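
To sanity-check the combined file, you can point the same standard-library parser at it (the bot names passed to can_fetch are just illustrative tokens, and crawl_delay() needs Python 3.6+):

import urllib.robotparser

ROBOTS_TXT = """\
User-agent: googlebot
User-agent: google
User-agent: bingbot
User-agent: bing
Disallow: /bedven/bedrijf/
Crawl-delay: 10

User-agent: *
Disallow: /
"""

rp = urllib.robotparser.RobotFileParser()
rp.parse(ROBOTS_TXT.splitlines())

print(rp.can_fetch("googlebot", "/"))                  # True: Google may crawl the rest
print(rp.can_fetch("googlebot", "/bedven/bedrijf/x"))  # False: blocked path
print(rp.can_fetch("SomeOtherBot", "/"))               # False: all other bots are locked out
print(rp.crawl_delay("bingbot"))                       # 10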