BotDetect ASP.NET CAPTCHA Validation & Security FAQ

This page answers frequently asked questions about BotDetect ASP.NET Captcha validation and security.

Table of Contents

Why can't users correct their Captcha code input when they get it wrong? If they make a mistake, it seems they have to type in a whole new code from a new image.

You are right, and this behavior of the Captcha component is by design. Only one validation attempt is allowed per Captcha code for security purposes.

If we allowed multiple retries on solving the same Captcha, it would significantly lower the Captcha security, since bots could just brute-force different code values until they eventually got it right. Also, it would be much easier to write a bot which used OCR techniques to bypass the Captcha: if, for example, it could recognize two out of the five digits in the image, it would just have to brute-force the remaining three.

So a failed validation attempt (whether on the client- or server-side) always invalidates the current Captcha code. Successful server-side validations also remove the code (so we prevent cases where somebody solves just one Captcha and then keeps reusing it for multiple submissions).

Successful client-side validations (using the ValidationAttemptOrigin.Client value) are the only ones that don't invalidate the current code, so you can also validate the same submitted values on the server-side once all form fields have been filled out.

So basically, if the Captcha validation attempt wasn't successful, the Captcha image also needs to be reloaded and the previous user input cleared, since the old Captcha code has been invalidated.

I have a form protected with BotDetect Captcha that contains several other validated fields. When a user enters the correct Captcha code but server-side validation of another field fails, they are shown another Captcha image with a different code.

Is there a way to show them the same Captcha image and keep the entered code, so they don't have to solve more than one Captcha just because they entered an invalid value for another field?

Users definitely shouldn't have to solve another Captcha if they enter the correct Captcha code, but (for example) username validation fails. The purpose of Captcha is to ensure the user is human, and once they solve it this purpose is fulfilled.

If you have to return them to the form because another field value needs to be corrected, it's best not to show them the Captcha at all. The simplest way to remember that the user has passed the Captcha test successfully is to store the validation result on the server, and check it before displaying the page to the user. For example:

/// <summary>
/// flag showing the user successfully passed the Captcha test
/// </summary>
protected bool IsHuman
{
  get
  {
    bool isHuman = false;
    try
    {
      if (null != Session["IsHuman"])
      {
        isHuman = (bool)Session["IsHuman"];
      }
    }
    catch (InvalidCastException) { /* ignore cast errors */ }
    return isHuman;
  }
  set
  {
    Session["IsHuman"] = value;
  }
}

protected void Page_Load(object sender, EventArgs e)
{
  // validate the Captcha to check we're not dealing with a bot,
  // but only on post-backs (there is no user input on the first load)
  if (Page.IsPostBack && !IsHuman)
  {
    string userInput = CaptchaCodeTextBox.Text.Trim().ToUpper();
    IsHuman = ExampleCaptcha.Validate(userInput);
    CaptchaCodeTextBox.Text = null; // clear previous user input
  }

  // TODO: other field validation
  if (Page.IsValid && IsHuman)
  {
    // TODO: the protected code, e.g. account registration
  }
}

protected void Page_PreRender(object sender, EventArgs e)
{
  // the Captcha is only rendered if it hasn't already been solved
  if (IsHuman)
  {
    ExampleCaptcha.Visible = false;
    CaptchaCodeTextBox.Visible = false;
  }
}

For security reasons, it is not possible to get the same BotDetect Captcha image on two page loads, nor to use the same code for more than one Captcha image.

I want to validate the Captcha on the client side without doing a post back. Do you have any suggestions? Is it possible to retrieve the current Captcha code at Page_Load, and send it to the client?

If you want to avoid full page postbacks, take a look at the following:

Pure client-side CAPTCHA validation drawbacks

Pure client-side Captcha validation (without any communication with the server) is not supported by BotDetect, since such a Captcha is trivial to bypass, and doesn't provide any serious protection from bots. For example:

  • You want users to post comments only if they have successfully solved the Captcha.
  • If the Captcha validation is purely client-side, this means JavaScript code must send the user's comment to the server when the Captcha code is entered correctly.
  • So the spammer only needs to solve the Captcha once, and note how you handle the result: e.g. sending a specific POST parameter, or redirecting to a specific page.
  • After that, they can simulate the same behavior in their bot and bypass the Captcha completely – by simply faking the POST parameter, or accessing the redirection landing page directly.
  • You can back the client-side Captcha validation by also validating the same user input on the server once the page is posted and before recording the user comment.
  • But since you are keeping the correct Captcha solution on the client for validation, bots can have easy access to that code and then always solve the Captcha correctly.

The exact details would of course depend on your specific use-case and Captcha integration scenario. But essentially, all client-side code is insecure and can be faked or modified by malicious parties. As a consequence, Captcha codes must only be kept on the server, and all Captcha validation must be performed on the server as well.

Client-side CAPTCHA validation – the solution

You can avoid full page postbacks by using ASP.NET Ajax or another Ajax library to make asynchronous Captcha validation requests to the server, and processing the result on the client:

  • When the Ajax Captcha validation fails, you can show the user a new Captcha image without affecting the rest of the page, thus improving the user experience and overall usability of the page.
  • You should always change the Captcha code in such cases, since allowing multiple attempts at solving the same Captcha makes OCR guessing much easier.
  • When the Ajax Captcha validation succeeds, you should then submit the page to the server and validate the user Captcha input again.
  • Only after successful server-side Captcha validation should you execute the "protected" operation (e.g. record the user comment) on the server.

Is it possible to validate the user Captcha input in a static context (outside the ASP.NET Web Form lifecycle)?

The BotDetect values you need for static validation can be accessed when the Captcha is added to the page using:

string captchaId = ExampleCaptcha.CaptchaId;
string instanceId = ExampleCaptcha.CaptchaControl.CurrentInstanceId;

You would then persist the captchaId and instanceId values by adding them to the page as hidden field values (or just read them from the automatically created BotDetect client-side object, or use whatever other approach is suitable).

Assuming you can send the captchaId, instanceId and userInput values with your Ajax / Web Service request, you can get the validation result using the static Validate() variant declared in the CaptchaControl class:

bool validationResult = 
  BotDetect.Web.CaptchaControl.Validate(captchaId, userInput, instanceId);
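
For example, a minimal sketch of answering asynchronous validation requests with this static method could be a generic handler along the following lines. The handler class name and the request parameter names are illustrative assumptions (not part of the BotDetect API); Session state access is declared because the Captcha codes are kept in Session state:

```csharp
using System.Web;
using System.Web.SessionState;

// hypothetical .ashx handler receiving captchaId, instanceId and
// userInput from an Ajax request, and returning the validation result
public class CaptchaAjaxValidationHandler : IHttpHandler, IRequiresSessionState
{
  public void ProcessRequest(HttpContext context)
  {
    string captchaId = context.Request["captchaId"];
    string instanceId = context.Request["instanceId"];
    string userInput = context.Request["userInput"];

    // all Captcha validation happens on the server; the client-side
    // code only displays the result
    bool validationResult =
      BotDetect.Web.CaptchaControl.Validate(captchaId, userInput, instanceId);

    context.Response.ContentType = "text/plain";
    context.Response.Write(validationResult ? "true" : "false");
  }

  public bool IsReusable { get { return true; } }
}
```

The client-side script would then read the plain-text response and either submit the form (on success) or reload the Captcha image and clear the input field (on failure), as described in the Ajax validation section above.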

What is the difference between ValidationAttemptOrigin.Server and ValidationAttemptOrigin.Client?

The two ValidationAttemptOrigin values were added to support a specific use-case involving Ajax Captcha validation: somebody starts with a regular full post-back Web Forms page using BotDetect Captcha, and wants to improve the usability of the form by performing asynchronous Captcha validation.

The improvement Ajax Captcha validation brings is that users get faster feedback when they enter an incorrect Captcha code, and they won't have to reload the full form or re-enter the other form values. When users enter a valid Captcha code, a full post-back is performed and the form is validated again on the server (since all client-side code is unsafe and can be tampered with by malicious clients).

So to support this use-case, we have to allow the same Captcha code to be successfully validated twice (once from an Ajax request, and once in the full form post-back). This is why we added the ValidationAttemptOrigin.Client flag, which signals the Captcha validation not to remove the successfully validated code immediately (since it will need to be validated again in the full form post-back). All other validation attempts remove the stored Captcha code after one validation (regardless of the result) for security reasons.

In conclusion, ValidationAttemptOrigin.Client is used when performing Ajax Captcha validation, and ValidationAttemptOrigin.Server is used when performing full post-back Captcha validation.

I implemented BotDetect Captcha protection, but my form is still getting some suspicious submissions. How can I check that the Captcha validation works?

BotDetect includes a troubleshooting utility that allows you to generate a log of all Captcha validation attempts (see the Captcha Troubleshooting Example included in the BotDetect installation). You can turn such logging on for a while, and then review whether Captcha validation always failed when an incorrect code was submitted and succeeded when the code was correct.

If Captcha validation worked properly but your form still got submitted even with incorrect Captcha code entries, the problem is probably in your code. You should review your form logic, and check that the protected action can only be performed after form validation (including the Captcha check) succeeds.

Captcha validation seems to work properly, but submitted data is suspicious. Could a bot be bypassing the Captcha?

If all submissions are sent with the correct Captcha code value, it is very unlikely that any bot is actually cracking the Captcha images, regardless of the drawing algorithms used.

To determine what is really happening, the recorded data might contain some clues. Besides the frequency of the entries (every few seconds or minutes), do you have any other indications that it's bots and not humans – e.g. nonsensical data, links to spam websites, empty or obviously incorrect Captcha code values, etc.?

Or maybe there are multiple submissions with the same data, with some people submitting the page multiple times after they solved the Captcha once and Session["IsHuman"] was set for them (if you use that approach)? Is there any logical explanation for human users submitting the form as often as they do?

IIS Log File Troubleshooting

Once we eliminate that possibility, it's still possible that some other problem is allowing somebody to bypass the Captcha protection. To investigate this, your IIS log file can be useful:

  • Does every form submission have a form load preceding it, and a Captcha image request? A human visitor pattern would be: GET the page, GET the Captcha image, POST the form, GET the page, GET the Captcha image, POST the form, etc.

    A typical bot pattern would be: GET the page, GET the Captcha image, POST the form, POST the form, POST the form etc., or maybe GET the page, GET the Captcha image, POST the form, GET the Captcha image, POST the form etc.
  • Do you also log visitor cookies? If so, do the SessionID cookie values change for each of these fast-paced submissions, or are they all using the same SessionID? Or maybe they are all coming with cookieless paths (SessionID embedded in the URL)?
  • Do all of these attempts come with the same User Agent, or different ones?

Throttling Access

Another thing to keep in mind is that you shouldn't allow an unlimited number of Captcha solving attempts. For example, if a user tries to solve the Captcha 10 times and fails every time, it's probably a bot (assuming you use readable ImageStyle and CodeLength settings, of course) and should be banned from further attempts.

If you allow a single user to try to solve the Captcha indefinitely, that's a security issue. So you could keep a Session state counter recording the user's total Captcha validation attempts, and automatically consider them a bot after 10 failed attempts.
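
A minimal sketch of such a counter, assuming a hypothetical "CaptchaFailedCount" session key and the threshold of 10 failed attempts (both the key name and where these members live – the page class – are illustrative assumptions):

```csharp
// hypothetical session key and threshold, for illustration only
protected bool TooManyFailedAttempts
{
  get
  {
    int failed = (Session["CaptchaFailedCount"] as int?) ?? 0;
    return failed >= 10;
  }
}

protected void RecordFailedCaptchaAttempt()
{
  int failed = (Session["CaptchaFailedCount"] as int?) ?? 0;
  Session["CaptchaFailedCount"] = failed + 1;
}
```

You would call RecordFailedCaptchaAttempt() whenever Validate() returns false, and check TooManyFailedAttempts before processing further submissions from that Session.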

Now, a typical bot will not persist its SessionID properly, and will start a new Session for every attempt. In that case, the above measure won't work, and you should fall back to elementary DoS protection, keeping a cache of requests by IP address.

If all of these attempts are coming from the same IP, you should only allow (for example) 10 form submission attempts per IP per hour, and if any particular IP address breaks this limit, ban it for an hour. If a certain IP continues to break the limit after the ban is lifted, you can ban it for progressively longer periods of time, etc.

Occasionally my web site is seeing an excessive number of HTTP requests to BotDetectCaptcha.ashx. The number of requests is more than 100 times the number of requests for any other path in the same period. Do you have any idea what is going on?

After examining the logfile you sent us, we found the following:

  1. Somebody from the IP address aaa.bbb.ccc.ddd (redacted) requested the BotDetectCaptcha.ashx path 9564 times directly, with some sort of bot using (or masquerading as) IE 6.0. This URL was requested several times per second, making it obvious the requests were automated.
  2. None of those requests got the bot anything except 302 redirects. The Captcha querystring is unique for each Captcha code, and the code is kept in Session state. For each bot request, the ASP.NET runtime detected that the client had no valid Session state, and sent the 302 redirect code.

    Since the bot then ignored the redirect and requested the exact same URL again, not a single Captcha code was ever generated on the server, let alone any Captcha images or sounds.

    This behavior (ignoring the redirect) leads us to believe the bot wasn't automating IE 6.0, but was some sort of (badly) written custom program.

Based on this information and the naivety of the attempt, this doesn't appear to be a serious threat to your website. However, you should continue monitoring your logs, since this large number of requests could consume a lot of bandwidth if and when the person writing the bot learns what a redirect is :)


There is no 100% effective way to prevent bots from even trying to access your site like this, but there are several measures you can take:

  • You can file an abuse report with the ISP this IP address is allocated to - you can find the ISP by querying the WHOIS database for the IP address in question. This probably won't prevent the individual responsible from trying again, but if they were naive enough to run the bot from their home or work computer, they could theoretically be identified. ISPs are often too busy to react to each abuse report they get, but it can't hurt to try.
  • This large number of requests with the same characteristics (the same client IP, the same user agent, the same request path and querystring) could be considered a simple DoS attempt, and treated accordingly: you could count how many Captcha requests with the exact same querystring come to your server from each IP address, and start blocking further requests after a certain threshold is crossed.

    For example, if 50 requests are made within 5 minutes, the IP address gets blocked for an hour, and if it happens again after that hour, the IP address gets blocked for 6 / 12 / 24 hours, etc. This could be implemented as a simple ASP.NET HttpModule.
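
A minimal sketch of that HttpModule idea, counting requests per client IP in the ASP.NET cache; the module name, the thresholds, and the cache key prefix are illustrative assumptions, and a production implementation would also have to account for clients behind shared proxies:

```csharp
using System;
using System.Web;
using System.Web.Caching;

// hypothetical throttling module: rejects requests from an IP address
// once it exceeds a request count within the counting window
public class SimpleThrottleModule : IHttpModule
{
  private const int Limit = 50;  // max requests per window per IP (assumed)
  private static readonly TimeSpan Window = TimeSpan.FromMinutes(5);

  public void Init(HttpApplication app)
  {
    app.BeginRequest += (sender, e) =>
    {
      HttpContext context = ((HttpApplication)sender).Context;
      string key = "throttle-" + context.Request.UserHostAddress;

      int count = (context.Cache[key] as int?) ?? 0;
      if (count >= Limit)
      {
        // over the limit: reject until the cached counter expires
        context.Response.StatusCode = 403;
        app.CompleteRequest();
        return;
      }

      // re-inserting resets the absolute expiration, so this behaves
      // as a simple sliding window rather than a fixed one
      context.Cache.Insert(key, count + 1, null,
        DateTime.UtcNow.Add(Window), Cache.NoSlidingExpiration);
    };
  }

  public void Dispose() { }
}
```

The module would then be registered in web.config; the escalating ban periods (6 / 12 / 24 hours) could be implemented by additionally caching a per-IP ban level and using it to scale the expiration time.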

As you can see, Captcha protection can prevent bots from filling out your forms, but it can't stop all sorts of hacking attempts. Captcha is a specific security measure used to stop a specific sort of security problem, and it should be used along with other, complementary security measures that prevent other possible problems (such as DoS attempts).