Yam Big Richard

Big Rich from Birmingham likes music, art and parties. Will code for cash. Likes socialising with beer. Likes digital art. Reads too many magazines. Procrastinates far too much.

staceythinx:

Jerry Gretzinger has been mapping the imaginary land of Ukrania for 30 years. What began as a doodle on a single piece of paper has grown into a fully imagined small country with cities and farms. You can learn more about the project in this video:

Jerry’s Map from Jerry Gretzinger on Vimeo.

(via roomthily)

I’ve spent the last two days trying to work out why I couldn’t get any values passed between the view and the controller using the ReCaptcha for .NET plugin, or indeed any other plugin I tried. It appears that currently you can’t do this, so I had to hack a little to get something working.

First up, I added the reference and generated the ReCaptcha widget within my view:

@using Recaptcha

<input type="text" id="commentAuthor" />
<textarea id="commentContent"></textarea>
@Html.Raw(Html.GenerateCaptcha("captcha", "clean")) @Html.ValidationMessage("captcha")
<a href="#" id="addComment" title="Add Comment">Add Comment</a>

I then created a JavaScript function that grabs the challenge field and response field from the ReCaptcha widget so that they can be stringified and sent to the controller via a JSON POST.

function addComment() {

    //grab the values generated by the ReCaptcha widget
    var challengeField = $("input#recaptcha_challenge_field").val();
    var responseField = $("input#recaptcha_response_field").val();

    if (responseField.length != 0) {
        var validation = { ChallengeField: challengeField,
                           ResponseField: responseField };
        var dat = JSON.stringify({ VAL: validation });

        $.ajax({
            type: "POST",
            url: url,
            data: dat,
            dataType: "json",
            contentType: "application/json; charset=utf-8",
            success: function (returndata) {
                if (returndata.ok) {
                    //validation fine, do something
                }
                else {
                    //validation failed, do something else
                }
            }
        });
    }
}
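One thing the snippet glosses over: url needs to point at the controller action and the function needs wiring up to the link. Something like this works, although the route here is just an assumption, swap in whatever your own action maps to:

//assumed route - point this at your own controller action
var url = "/Blog/AddComment";

$(document).ready(function () {
    $("#addComment").click(function (e) {
        e.preventDefault();
        addComment();
    });
});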

Then in my controller method I created a new ReCaptcha validator, validated against the values passed to it, and returned a JSON response.

public ActionResult AddComment(Comment COMMENT, Validation VAL)
{
    if (ModelState.IsValid)
    {
        RecaptchaValidator rv = new RecaptchaValidator();
        rv.RemoteIP = Request.ServerVariables["REMOTE_ADDR"];
        //the private key is set in the web.config
        rv.PrivateKey =
            System.Configuration.ConfigurationManager.AppSettings["ReCaptchaPrivateKey"];
        rv.Challenge = VAL.ChallengeField;
        rv.Response = VAL.ResponseField;
        RecaptchaResponse rresp = rv.Validate();

        if (rresp.IsValid)
        {
            //do something with your data and return ok
            return Json(new { ok = true });
        }
        else
        {
            //something wrong with the validation
            return Json(new { ok = false, message = rresp.ErrorMessage });
        }
    }
    else
    {
        //something else is broken
        return Json(new { ok = false, message = "Error adding comment" });
    }
}
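The Validation class bound by the controller isn’t shown above; a minimal sketch of what it needs to look like for the model binding to work (the exact shape is an assumption, the plugin doesn’t provide one):

//minimal model class for the posted captcha values - the property
//names match the JSON keys sent from the view
public class Validation
{
    public string ChallengeField { get; set; }
    public string ResponseField { get; set; }
}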

Echo Lake - Even The Blind
Dead Berlin - Hipnosis
Ombre - Tormentas
Ssaliva - AVE
Will Stratton - Who Will
Jacaszek - White Wind Dance
The XX - Fiction (Kid Smpl Remix)
Black Sabbath - Planet Caravan (Poolside Rework extended intro)
Monomono - Water Pass Gari (Pts 1 & 2)
Free Association - Purple Mikes
Fela Kuti - No Possible (Joystick Jays Vu Remix)

I answered some questions on the Processing forum, which led to me writing this small function to determine the centroid of a polygon. It uses this formula to determine the position of the centroid, Cx and Cy:

Cx = 1/(6A) * sum(i=0..n-1) of (x_i + x_(i+1)) * (x_i * y_(i+1) - x_(i+1) * y_i)

Cy = 1/(6A) * sum(i=0..n-1) of (y_i + y_(i+1)) * (x_i * y_(i+1) - x_(i+1) * y_i)

where A is the signed area of the polygon:

A = 1/2 * sum(i=0..n-1) of (x_i * y_(i+1) - x_(i+1) * y_i)

(the indices wrap around, so vertex n is vertex 0)

It might be useful to someone.


class pPoint {
 int x, y;
 
 pPoint(int x, int y) {
  this.x = x;
  this.y = y;
 } 
}

void setup() {

  size(400,400);

  noFill();
  rect(10,10,200,200);

  pPoint[] pArray = new pPoint[4];
  pArray[0] = new pPoint(10,10);
  pArray[1] = new pPoint(210,10);
  pArray[2] = new pPoint(210,210);
  pArray[3] = new pPoint(10,210);

  pPoint pp = getCentroid(pArray);

  println(pp.x + "," + pp.y);
  ellipse(pp.x, pp.y, 2, 2);
}

pPoint getCentroid(pPoint[] pArray) {

  int X = 0;
  int Y = 0;
  int A = 0;

  int numPoints = pArray.length;

  //step along each edge of the polygon, wrapping back
  //round to the first vertex to close the shape
  for (int i = 0; i < numPoints; i++) {
    int j = (i+1) % numPoints;

    //cross term shared by all three sums
    int cross = (pArray[i].x * pArray[j].y) - (pArray[j].x * pArray[i].y);

    X = X + (pArray[i].x + pArray[j].x) * cross;
    Y = Y + (pArray[i].y + pArray[j].y) * cross;
    A = A + cross;
  }

  A = A/2;

  X = X/(6*A);
  Y = Y/(6*A);

  return new pPoint(X, Y);
}
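A quick sanity check: the centroid of a triangle is just the average of its vertices, so this should print 100,100:

pPoint[] tri = new pPoint[3];
tri[0] = new pPoint(0,0);
tri[1] = new pPoint(300,0);
tri[2] = new pPoint(0,300);

pPoint c = getCentroid(tri);
println(c.x + "," + c.y); //prints 100,100 - matches (0+300+0)/3, (0+0+300)/3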
  
  

BMXing

Edge Detection Video

Continuing on with the image processing theme, I looked at blurring an image. Gaussian blur is similar to the Sobel edge detection algorithm in that, for each pixel you are processing, it looks at the surrounding pixels to determine what change needs to be made. Each pixel becomes the average of the pixels around it, except that it is a weighted average, meaning that the pixels closer to the centre are given more importance.

To work out what importance to give each pixel surrounding (and including) the centre, we use this equation:

G(x,y) = 1/(2*pi*sigma^2) * e^(-(x^2 + y^2) / (2*sigma^2))

Where x is the horizontal distance from the centre pixel, y is the vertical distance from it, and sigma is the blur value (the higher the value, the more the blur).

In Processing the equation looks like this:

    weightingValue = 1/(2*PI*sigma*sigma) *
                        exp(-1*((x*x)+(y*y))/(2*(sigma*sigma)));

By plugging the x, y and sigma values in for each pixel we end up with 9 weighting values, which I store in an array. These values need to be normalised before they can be used to change the colour values of a pixel. Doing this is simple: add up all of the current weighting values and then multiply each one by 1/total, which ensures that the new values add up to 1.
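In Processing the normalisation is just a couple of lines (this is the same loop that appears in the full code below):

  float normVal = 0;
  //total up the nine raw weightings
  for (int i = 0; i < 9; i++) {
    normVal += kernelWeightings[i];
  }
  //scale each weighting so that together they sum to 1
  for (int i = 0; i < 9; i++) {
    kernelWeightings[i] = (1/normVal) * kernelWeightings[i];
  }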

Now that you have the weighting values you can step through every pixel, read the colour intensities of the surrounding pixels, multiply each one by its weighting value, and add the results together to create the new value.

Working out the top-middle value and adding it to the running total looks like this:

      //top middle
      //get colour of top middle pixel
      px = pixels[((y-1)*width)+x];
      // get r,g and b values for top middle pixel and multiply by weighting
      redvalue = red(px)*kernelWeightings[1];
      bluevalue = blue(px)*kernelWeightings[1];
      greenvalue = green(px)*kernelWeightings[1];
      // add weighted r,g and b values together
      redIntensity += redvalue;
      greenIntensity += greenvalue;
      blueIntensity += bluevalue;

Once all of the colour values of the surrounding pixels have been weighted and added together we can plug them back into the original pixel. Note that I have used an array of pixels to do this, as we don’t want to change the original values until every one of them has been modified.

     pixelarray[x+(width*y)] = color(redIntensity,greenIntensity,blueIntensity);                 

After this we can use the array of pixels to create a new image.

   for (int i = 0; i < width*height; i++) {
       pixels[i] = pixelarray[i];
   }
   background(0);
   updatePixels();

Passing through the pixels more than once will produce an image that looks something like this: 

[image: the blurred result after several passes]

Full code:

float sigma = 3.5;  //The blur factor

float returnWeightingValue(float x, float y) {
  
  float weightingValue;
  
  //Gaussian Equation
  weightingValue = 1/(2*PI*sigma*sigma) *
                        exp(-1*((x*x)+(y*y))/(2*(sigma*sigma)));
  return weightingValue;
}

void gaussBlur() {

  float[] kernelWeightings = new float[9];
  float normVal = 0;
  
  //top left
  kernelWeightings[0] = returnWeightingValue(1,1);
  normVal+= kernelWeightings[0];
  
  //top middle
  kernelWeightings[1] = returnWeightingValue(0,1);
  normVal+= kernelWeightings[1];
  
  //top right
  kernelWeightings[2] = returnWeightingValue(1,1);
  normVal+= kernelWeightings[2];
  
  //mid left
  kernelWeightings[3] = returnWeightingValue(1,0);
  normVal+= kernelWeightings[3];
  
  //middle
  kernelWeightings[4] = returnWeightingValue(0,0);
  normVal+= kernelWeightings[4];
  
  //mid right
  kernelWeightings[5] = returnWeightingValue(1,0);
  normVal+= kernelWeightings[5];
  
  //bottom left
  kernelWeightings[6] = returnWeightingValue(1,1);
  normVal+= kernelWeightings[6];
  
  //bottom middle
  kernelWeightings[7] = returnWeightingValue(0,1);
  normVal+= kernelWeightings[7];
  
  //bottom right
  kernelWeightings[8] = returnWeightingValue(1,1);
  normVal+= kernelWeightings[8];
  
  for (int i = 0; i<9; i++) {
    kernelWeightings[i] = (1/normVal)*kernelWeightings[i];
  }
  
  loadPixels();
  int[] pixelarray = new int[width*height];
  
  float redvalue, bluevalue, greenvalue;
  float redIntensity, greenIntensity, blueIntensity;
  
  color px;
  
  for (int x = 1 ; x < width-1; x++) {
    for (int y = 1; y < height-1; y++) {
  
    //top left Pixel
      px = pixels[((y-1)*width)+(x-1)];
      redvalue = red(px)*kernelWeightings[0];
      bluevalue = blue(px)*kernelWeightings[0];
      greenvalue = green(px)*kernelWeightings[0];
      redIntensity = redvalue;
      greenIntensity = greenvalue;
      blueIntensity = bluevalue;
      
      //top middle
      px = pixels[((y-1)*width)+x];
      redvalue = red(px)*kernelWeightings[1];
      bluevalue = blue(px)*kernelWeightings[1];
      greenvalue = green(px)*kernelWeightings[1];
      redIntensity += redvalue;
      greenIntensity += greenvalue;
      blueIntensity += bluevalue;
      
      // top right 
      px = pixels[((y-1)*width)+x+1];
      redvalue = red(px)*kernelWeightings[2];
      bluevalue = blue(px)*kernelWeightings[2];
      greenvalue = green(px)*kernelWeightings[2];
      redIntensity += redvalue;
      greenIntensity += greenvalue;
      blueIntensity += bluevalue;

      
      //middle left pixel
      px = pixels[(y*width)+(x-1)];
      redvalue = red(px)*kernelWeightings[3];
      bluevalue = blue(px)*kernelWeightings[3];
      greenvalue = green(px)*kernelWeightings[3];
      redIntensity += redvalue;
      greenIntensity += greenvalue;
      blueIntensity += bluevalue;

      //middle
      px = pixels[(y*width)+x];
      redvalue = red(px)*kernelWeightings[4];
      bluevalue = blue(px)*kernelWeightings[4];
      greenvalue = green(px)*kernelWeightings[4];
      redIntensity += redvalue;
      greenIntensity += greenvalue;
      blueIntensity += bluevalue;
      
      //middle right
      px = pixels[(y*width)+x+1];
      redvalue = red(px)*kernelWeightings[5];
      bluevalue = blue(px)*kernelWeightings[5];
      greenvalue = green(px)*kernelWeightings[5];
      redIntensity += redvalue;
      greenIntensity += greenvalue;
      blueIntensity += bluevalue;

      
      //bottom left
      px = pixels[((y+1)*width)+(x-1)];
      redvalue = red(px)*kernelWeightings[6];
      bluevalue = blue(px)*kernelWeightings[6];
      greenvalue = green(px)*kernelWeightings[6];
      redIntensity += redvalue;
      greenIntensity += greenvalue;
      blueIntensity += bluevalue;

      //bottom middle
      px = pixels[((y+1)*width)+x];
      redvalue = red(px)*kernelWeightings[7];
      bluevalue = blue(px)*kernelWeightings[7];
      greenvalue = green(px)*kernelWeightings[7];
      redIntensity += redvalue;
      greenIntensity += greenvalue;
      blueIntensity += bluevalue;

      //bottom right
      px = pixels[((y+1)*width)+x+1];
      redvalue = red(px)*kernelWeightings[8];
      bluevalue = blue(px)*kernelWeightings[8];
      greenvalue = green(px)*kernelWeightings[8];
      redIntensity += redvalue;
      greenIntensity += greenvalue;
      blueIntensity += bluevalue;

      pixelarray[x+(width*y)] = color(redIntensity,greenIntensity,blueIntensity);              
    }
  }

 for (int i = 0; i < width*height; i++) {
   pixels[i] = pixelarray[i];
 }

 background(0);
 updatePixels();

  
  
}

void setup () {
  
  size(375,500);
  PImage img = loadImage("richard2.jpg");
  image(img,0,0);
   
}

void draw() {
  
  if (mousePressed == true) {
    gaussBlur();
  }
  
}

Here is a quick run-through of the code used to do edge detection in Processing. First up I create an array that can contain all of the intensity values for the screen, place an image on the screen, and load the image’s pixels into the system pixels[] array:

  int[] pixelarray = new int[width*height];
  PImage img = loadImage("image.jpg");
  image(img,0,0);
  loadPixels(); 

Iterating through each individual pixel, I get the red, green and blue intensity values for that pixel and add them together, then, depending on the values in the Sobel operator, multiply them and add them to the overall intensity. For example, for the top-left pixel in the Sobel matrix:

      //top-left neighbour: multiplied by -1 in Gx and +1 in Gy
      px = pixels[((y-1)*width)+(x-1)];
      redvalue = red(px);
      bluevalue = blue(px);
      greenvalue = green(px);
      intensity = redvalue + greenvalue + bluevalue;
      Gx += -intensity;
      Gy += intensity;

Once I have worked out values for all of the surrounding pixels, I use Pythagoras to work out the overall gradient length. Then I normalise it so that it can be used as an output value.

      //calculate normalised length of gradient
      //(4328 is the largest possible length: sqrt((4*765)^2 + (4*765)^2),
      //where 765 = 255*3 is the maximum pixel intensity and 4 is the sum
      //of the positive kernel weights)
      glength = sqrt((Gx*Gx)+(Gy*Gy));
      glength = (glength/4328) * 255;

I load this new information into the array of intensities, which is used to create the image of the outline. I have also used a threshold value to remove the detail from within the picture.

      if (glength > 10)
        pixelarray[x+(width*y)] = color(glength);
      else
        pixelarray[x+(width*y)] = color(0);

Finally I load the array into the system array of pixels which is used to display the new image.

   for(int i = 0; i<width*height;i++) {
     pixels[i] = pixelarray[i];
   }

Full code - https://github.com/bigrichardc/sketchbook/blob/master/sobelImageDetection.pde

I’ve been doing some experiments with altering the colour values of digital images and video depending upon their initial values, which led me to look into how edge detection works. A quick web search led me to pages about the Sobel operator. Back in the mists of time I did a mathematics degree, but after many years of not using any of the hard (or even easy) maths I was taught I have forgotten it all, and so was left scratching my head at the various matrices, equations and words like convolution that I came across.

After a bit more head scratching, and considering putting it in the “too hard to do” file, I found a Python/C++ tutorial which allowed me to work out what was going on and realise that the maths stuff just looked scary.

In basic terms the Sobel operator is used to determine the colour intensity of a pixel and of its neighbours, and then, using a method called convolution, to approximate the difference in intensity along the x and y axes. A bit of a mouthful I know, but basically it does this:

  1. Get the intensities of pixel x,y and of its 8 surrounding pixels.
  2. Work out the change in intensity of the pixels across the x-axis.
  3. Work out the change in intensity in the y-axis.
  4. Use Pythagoras to work out the overall change for those pixels.

As seen in my bad diagram.

[image: diagram of the steps above]

If you do this for each pixel on the screen you will get an intensity change level for each one, which will allow you to build a picture showing the outlines, like so:

[image: edge-detected result showing outlines]

Another thing you need to know is that the Sobel operator consists of two matrices that look like this:

The Sobel operator:

  Gx = [ -1  0  +1 ]      Gy = [ +1  +2  +1 ]
       [ -2  0  +2 ]           [  0   0   0 ]
       [ -1  0  +1 ]           [ -1  -2  -1 ]

Basically these are the values you multiply the intensities of the pixels surrounding the pixel you are working out the change in intensity for, going left to right (Gx) and top to bottom (Gy). So, when working out the change across the x-axis, the position in the matrix corresponds to the value you multiply the intensity of that pixel by; for example, the pixel intensity in the top left-hand corner is multiplied by -1 and the intensity of the bottom right by +1.
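Putting that into code, working out the whole gradient for one pixel looks something like this sketch. intensity() here is a hypothetical helper, it just adds the red, green and blue values of a pixel together, the same as the code run-through does:

//hypothetical helper - total colour intensity of the pixel at (x, y)
float intensity(int x, int y) {
  color px = pixels[(y*width)+x];
  return red(px) + green(px) + blue(px);
}

//gradient length for the pixel at (x, y) using the two matrices above
float gradientLength(int x, int y) {
  float Gx = -1*intensity(x-1,y-1) + 1*intensity(x+1,y-1)
           + -2*intensity(x-1,y)   + 2*intensity(x+1,y)
           + -1*intensity(x-1,y+1) + 1*intensity(x+1,y+1);

  float Gy =  1*intensity(x-1,y-1) + 2*intensity(x,y-1) + 1*intensity(x+1,y-1)
           + -1*intensity(x-1,y+1) - 2*intensity(x,y+1) - 1*intensity(x+1,y+1);

  //use Pythagoras to get the overall change
  return sqrt((Gx*Gx)+(Gy*Gy));
}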

That really is all you need to know in order to be able to detect edges in an image. There are other types of operators that can be used to detect edges, each with their own properties, but as yet I have not looked at them.

For more formal information including scary looking maths here’s the Sobel operator Wiki page - http://en.wikipedia.org/wiki/Sobel_operator

If that doesn’t make too much sense, my next post will look at the code I used to produce the above image, which hopefully makes things clearer.

Colour detection on BMX video.

Experimenting with pixel colours.

Screen shot of the rotating squares sketch below.