Yam Big Richard

Big Rich from Birmingham.  Music.  Art.  Code.

June 3, 2014 at 5:28pm
1 note

I’ve been trying to recreate Ben F. Laposky’s Oscillons using Processing, my cheap camera and an ancient laptop, but I’m not quite there; the camera having no manual focus doesn’t help.

May 27, 2014 at 4:33pm
0 notes

Geolocations of Spanish Regions

Dunno if this is useful for anyone, but it took me quite a while to work out the other week. Here is a JavaScript array of geolocations that roughly hit the centre of the Spanish autonomous regions and their subdivisions, along with the subdivision code used in Google Maps.  I used it to plot exciting graphs.  ;)

Here is the code:

var latlong = {};
latlong[0] = {"latitude":38.992766, "longitude":-1.861705, "region":"ES-AB"}; // Albacete
latlong[1] = {"latitude":42.896145, "longitude":-2.674412, "region":"ES-VI"}; // Álava
latlong[2] = {"latitude":38.477476, "longitude":-0.820585, "region":"ES-A"}; // Alicante
latlong[3] = {"latitude":37.411561, "longitude":-2.312036, "region":"ES-AL"}; // Almería
latlong[4] = {"latitude":43.479436, "longitude":-5.948510, "region":"ES-O"}; // Asturias
latlong[5] = {"latitude":40.645132, "longitude":-4.874412, "region":"ES-AV"}; // Ávila
latlong[6] = {"latitude":41.840581, "longitude":2.054743, "region":"ES-B"}; // Barcelona
latlong[7] = {"latitude":38.877576, "longitude":-6.462829, "region":"ES-BA"}; // Badajoz
latlong[8] = {"latitude":42.335145, "longitude":-3.674412, "region":"ES-BU"}; // Burgos
latlong[9] = {"latitude":40.006145, "longitude":-6.374412, "region":"ES-CC"}; // Cáceres
latlong[10] = {"latitude":36.806145, "longitude":-5.874412, "region":"ES-CA"}; // Cádiz
latlong[11] = {"latitude":43.296145, "longitude":-3.974412, "region":"ES-S"}; // Cantabria
latlong[12] = {"latitude":33.806145, "longitude":-8.174412, "region":"ES-CE"}; // Ceuta
latlong[13] = {"latitude":38.896145, "longitude":-3.674412, "region":"ES-CR"}; // Ciudad Real
latlong[14] = {"latitude":37.896145, "longitude":-4.674412, "region":"ES-CO"}; // Córdoba
latlong[15] = {"latitude":40.096145, "longitude":-2.174412, "region":"ES-CU"}; // Cuenca
latlong[16] = {"latitude":42.196145, "longitude":2.874412, "region":"ES-GI"}; // Gerona
latlong[17] = {"latitude":37.296145, "longitude":-3.674412, "region":"ES-GR"}; // Granada
latlong[18] = {"latitude":40.906145, "longitude":-3.004412, "region":"ES-GU"}; // Guadalajara
latlong[19] = {"latitude":42.896145, "longitude":-2.674412, "region":"ES-VI"}; // Álava (duplicate)
latlong[20] = {"latitude":43.256145, "longitude":-2.174412, "region":"ES-SS"}; // Guipúzcoa
latlong[21] = {"latitude":37.85, "longitude":-6.95, "region":"ES-H"}; // Huelva
latlong[22] = {"latitude":42.35, "longitude":-0.305, "region":"ES-HU"}; // Huesca
latlong[23] = {"latitude":39.896145, "longitude":3.074412, "region":"ES-PM"}; // Islas Baleares
latlong[24] = {"latitude":37.996145, "longitude":-3.774412, "region":"ES-J"}; // Jaén
latlong[25] = {"latitude":43.396145, "longitude":-8.374412, "region":"ES-C"}; // La Coruña
latlong[26] = {"latitude":33.506145, "longitude":1.120412, "region":"ES-GC"}; // Las Palmas de Gran Canaria
latlong[27] = {"latitude":41.896145, "longitude":0.724412, "region":"ES-L"}; // Lérida
latlong[28] = {"latitude":42.596145, "longitude":-5.54412, "region":"ES-LE"}; // León
latlong[29] = {"latitude":43.196145, "longitude":-7.34412, "region":"ES-LU"}; // Lugo
latlong[30] = {"latitude":40.896145, "longitude":-3.774412, "region":"ES-M"}; // Madrid
latlong[31] = {"latitude":33.206145, "longitude":-6.674412, "region":"ES-ML"}; // Melilla
latlong[32] = {"latitude":37.95, "longitude":-1.095, "region":"ES-MU"}; // Murcia
latlong[33] = {"latitude":42.35, "longitude":-4.55, "region":"ES-P"}; // Palencia
latlong[34] = {"latitude":42.45, "longitude":-8.595, "region":"ES-PO"}; // Pontevedra
latlong[35] = {"latitude":40.985, "longitude":-5.695, "region":"ES-SA"}; // Salamanca
latlong[36] = {"latitude":33.506145, "longitude":-1.374412, "region":"ES-TF"}; // Santa Cruz de Tenerife
latlong[37] = {"latitude":41.45, "longitude":-4.195, "region":"ES-SG"}; // Segovia
latlong[38] = {"latitude":37.45, "longitude":-5.95, "region":"ES-SE"}; // Sevilla
latlong[39] = {"latitude":41.85, "longitude":-2.4595, "region":"ES-SO"}; // Soria
latlong[40] = {"latitude":41.485, "longitude":1.195, "region":"ES-T"}; // Tarragona
latlong[41] = {"latitude":40.6785, "longitude":-1.095, "region":"ES-TE"}; // Teruel
latlong[42] = {"latitude":39.85, "longitude":-3.9695, "region":"ES-TO"}; // Toledo
latlong[43] = {"latitude":39.45, "longitude":-0.595, "region":"ES-V"}; // Valencia
latlong[44] = {"latitude":41.72, "longitude":-5.0595, "region":"ES-VA"}; // Valladolid
latlong[45] = {"latitude":43.4, "longitude":-2.695, "region":"ES-BI"}; // Vizcaya
latlong[46] = {"latitude":41.985, "longitude":-5.695, "region":"ES-ZA"}; // Zamora
latlong[47] = {"latitude":41.65, "longitude":-0.95, "region":"ES-Z"}; // Zaragoza
latlong[48] = {"latitude":42.94, "longitude":-1.695, "region":"ES-NA"}; // Navarra
latlong[49] = {"latitude":42.285, "longitude":-7.8695, "region":"ES-OR"}; // Ourense
latlong[50] = {"latitude":40.65, "longitude":-0.095, "region":"ES-CS"}; // Castellón
latlong[51] = {"latitude":37.25, "longitude":-4.495, "region":"ES-MA"}; // Málaga
// ---- these three do not store the actual geolocations, as I am drawing onto an amCharts regions map.
latlong[12] = {"latitude":33.806145, "longitude":-8.174412, "region":"ES-CE"}; // Ceuta
latlong[26] = {"latitude":33.506145, "longitude":1.120412, "region":"ES-GC"}; // Las Palmas de Gran Canaria
latlong[36] = {"latitude":33.506145, "longitude":-1.374412, "region":"ES-TF"}; // Santa Cruz de Tenerife
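As a quick usage sketch (findRegion is a made-up helper, shown with a two-entry subset of the full array), here's how you might pull an entry out by its region code:

```javascript
// Two-entry subset of the full array; findRegion is a hypothetical helper
// that returns the entry matching a given ISO 3166-2 code, or null.
var latlong = {};
latlong[0] = {"latitude": 38.992766, "longitude": -1.861705, "region": "ES-AB"}; // Albacete
latlong[1] = {"latitude": 42.896145, "longitude": -2.674412, "region": "ES-VI"}; // Álava

function findRegion(code) {
  for (var i in latlong) {
    if (latlong[i].region === code) {
      return latlong[i];
    }
  }
  return null;
}

console.log(findRegion("ES-VI").latitude); // 42.896145
```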

October 17, 2013 at 11:32am
0 notes

September 20, 2013 at 8:33am
175 notes
Reblogged from staceythinx


Jerry Gretzinger has been mapping the imaginary land of Ukrainia for 30 years. What began as a doodle on a single piece of paper has grown into a fully imagined small country with cities and farms. You can learn more about the project in this video:

Jerry’s Map from Jerry Gretzinger on Vimeo.

(Source: Wired, via roomthily)

June 11, 2013 at 12:13pm
0 notes

ReCaptcha from an AJAX post within an MVC4 view

I’ve spent the last two days trying to work out why I couldn’t get any values passed between the view and the controller using the ReCaptcha for .NET plugin, or indeed any other plugin I tried. It appears that currently you can’t do this, so I had to hack a little to get something working.

First up, I added the reference and generated the ReCaptcha widget within my view:

@using Recaptcha

<input type="text" id="commentAuthor" />
<textarea id ="commentContent"></textarea>
@Html.Raw(Html.GenerateCaptcha("captcha", "clean")) @Html.ValidationMessage("captcha")
<a href="#" id="addComment" title="Add Comment">Add Comment</a>

I then created a JavaScript function that grabs the challenge field and response field from the ReCaptcha widget so that they can be stringified and sent to the controller via a JSON post.

function addComment() {

    var challengeField = $("input#recaptcha_challenge_field").val();
    var responseField = $("input#recaptcha_response_field").val();

    if (responseField.length != 0) {
        var validation = { ChallengeField: challengeField,
                           ResponseField: responseField };
        var dat = JSON.stringify({ VAL: validation });

        $.ajax({
            type: "POST",
            url: "/Home/AddComment", // adjust to wherever your AddComment action lives
            data: dat,
            contentType: "application/json; charset=utf-8",
            success: function (returndata) {
                if (returndata.ok) {
                    // validation fine, do something
                } else {
                    // validation failed, do something else
                }
            }
        });
    }
}

Then in my controller method I created a new ReCaptcha item, validated against the values that I passed to it, and returned a JSON response.

public ActionResult AddComment(Comment COMMENT, Validation VAL)
{
    if (ModelState.IsValid)
    {
        RecaptchaValidator rv = new RecaptchaValidator();
        rv.RemoteIP = Request.ServerVariables["REMOTE_ADDR"];
        // the private key is set in the web.config (the key name here is illustrative)
        rv.PrivateKey = ConfigurationManager.AppSettings["RecaptchaPrivateKey"];
        rv.Challenge = VAL.ChallengeField;
        rv.Response = VAL.ResponseField;
        RecaptchaResponse rresp = rv.Validate();

        if (rresp.IsValid)
        {
            // do something with your data and return ok
            return Json(new { ok = true });
        }
        else
        {
            // something wrong with the validation
            return Json(new { ok = false, message = rresp.ErrorMessage });
        }
    }
    // something else is broken
    return Json(new { ok = false, message = "Error adding comment" });
}


January 29, 2013 at 1:09pm
0 notes

16Down8MillionToGo by Bigrichardc on Mixcloud

January 15, 2013 at 5:10pm
0 notes

ItsNotShoegaze by Bigrichardc on Mixcloud

Echo Lake - Even The Blind
Dead Berlin - Hipnosis
Ombre - Tormentas
Ssaliva - AVE
Will Stratton - Who Will
Jacaszek - White Wind Dance
The XX - Fiction (Kid Smpl Remix)
Black Sabbath - Planet Caravan (Poolside Rework extended intro)
Monomono - Water Pass Gari (Pts 1 & 2)
Free Association - Purple Mikes
Fela Kuti - No Possible (Joystick Jays Vu Remix)

January 13, 2013 at 10:14am
4 notes

Finding the Centroid of Non Intersecting Polygons

I answered some questions on the Processing forum, which led to me writing this small function to determine the centroid of a polygon. It uses this formula to determine the position of the centroid, Cx and Cy:

Cx = (1 / 6A) * Σ (x[i] + x[i+1]) * (x[i]*y[i+1] − x[i+1]*y[i])
Cy = (1 / 6A) * Σ (y[i] + y[i+1]) * (x[i]*y[i+1] − x[i+1]*y[i])
A  = (1 / 2) * Σ (x[i]*y[i+1] − x[i+1]*y[i])

It might be useful for someone.

class pPoint {
 int x, y;
 pPoint(int x, int y) {
  this.x = x;
  this.y = y;
 }
}

void setup() {
 pPoint[] pArray = new pPoint[4];
 pArray[0] = new pPoint(10,10);
 pArray[1] = new pPoint(210,10);
 pArray[2] = new pPoint(210,210);
 pArray[3] = new pPoint(10,210);
 pPoint pp = getCentroid(pArray);
 println(pp.x + "," + pp.y);
 ellipse(pp.x, pp.y, 2, 2);
}

pPoint getCentroid(pPoint[] pArray) {
  int X = 0;
  int Y = 0;
  int A = 0;
  int numPoints = pArray.length;
  for (int i = 0; i < numPoints; i++ ) {
   int j = (i + 1) % numPoints; // wrap round so the closing edge is included
   X = X + (pArray[i].x + pArray[j].x) *
           (pArray[i].x*pArray[j].y - pArray[j].x*pArray[i].y);
   Y = Y + (pArray[i].y + pArray[j].y) *
           (pArray[i].x*pArray[j].y - pArray[j].x*pArray[i].y);
   A = A + ((pArray[i].x * pArray[j].y) - (pArray[j].x * pArray[i].y));
  }
  A = A/2;
  X = X/(6*A);
  Y = Y/(6*A);
  pPoint pp = new pPoint(X,Y);
  return pp;
}
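For anyone not using Processing, here is the same formula sketched in JavaScript; like the version above, it wraps round to the first vertex so the last edge of the polygon is counted:

```javascript
function getCentroid(points) {
  var X = 0, Y = 0, A = 0;
  var n = points.length;
  for (var i = 0; i < n; i++) {
    var j = (i + 1) % n; // wrap back to the first vertex for the closing edge
    var cross = points[i].x * points[j].y - points[j].x * points[i].y;
    X += (points[i].x + points[j].x) * cross;
    Y += (points[i].y + points[j].y) * cross;
    A += cross;
  }
  A = A / 2; // signed area of the polygon
  return { x: X / (6 * A), y: Y / (6 * A) };
}

// centroid of a 200x200 square with its corner at (10,10)
var square = [{x: 10, y: 10}, {x: 210, y: 10}, {x: 210, y: 210}, {x: 10, y: 210}];
console.log(getCentroid(square)); // { x: 110, y: 110 }
```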

January 7, 2013 at 1:18pm
0 notes


0 notes

Edge Detection Video

3 notes

Gaussian Blur in Processing

Continuing on with the image-processing theme, I looked at blurring an image.  Gaussian blur is similar to the Sobel edge-detection algorithm insomuch that, for each pixel you are processing, it looks at the surrounding pixels to determine what change needs to be made.  Each pixel becomes the average of the pixels around it, except that it is a weighted average: pixels closer to the centre are of more importance.

To work out what importance we need to give to each pixel, surrounding and including the centre, we use this equation:

G(x, y) = (1 / (2πσ²)) * e^(−(x² + y²) / (2σ²))

where x is the horizontal distance from the centre pixel, y is the vertical distance from it, and sigma (σ) is the blur value (the higher the value, the more the blur).

In processing the equation would look like:

    weightingValue = 1/(2*PI*sigma*sigma) * exp(-(x*x + y*y)/(2*sigma*sigma));

By plugging in the x, y and sigma values for each pixel we end up with 9 weighting values, which I store in an array.  These values need to be normalised before they can be used to change the colour values of a pixel.  Doing this is simple: add up all of the current weighting values and then multiply each one by 1/total; this ensures that the new values will add up to 1.
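As a rough sketch (in JavaScript rather than Processing), building and normalising the nine weightings might look like this:

```javascript
// sigma is the blur value (the higher the value, the more the blur)
var sigma = 3.5;

function weight(x, y) {
  // the Gaussian equation from above
  return 1 / (2 * Math.PI * sigma * sigma) *
         Math.exp(-(x * x + y * y) / (2 * sigma * sigma));
}

// x,y offsets of the 9 kernel positions from the centre pixel
var offsets = [[-1, -1], [0, -1], [1, -1],
               [-1,  0], [0,  0], [1,  0],
               [-1,  1], [0,  1], [1,  1]];
var kernel = offsets.map(function (o) { return weight(o[0], o[1]); });

// normalise: divide each weighting by the total so they sum to 1
var total = kernel.reduce(function (a, b) { return a + b; }, 0);
kernel = kernel.map(function (w) { return w / total; });
```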

Now that you have the weighting values, you can step through every pixel, determine the colour intensities of the surrounding pixels, multiply them by the weighting values, and add the results together to create a new value.

Working out the top-middle value and adding it to the running totals would look like this:

      //top middle
      //get colour of top middle pixel
      px = pixels[((y-1)*width)+x];
      // get r,g and b values for top middle pixel and multiply by weighting
      redvalue = red(px)*kernelWeightings[1];
      bluevalue = blue(px)*kernelWeightings[1];
      greenvalue = green(px)*kernelWeightings[1];
      // add weighted r,g and b values together
      redIntensity += redvalue;
      greenIntensity += greenvalue;
      blueIntensity += bluevalue;

Once all of the colour values of the surrounding pixels have been weighted and added together, we can plug them back into the original pixel. Note that I have used an array of pixels to do this, as we don’t want to change the original values until every one of them has been modified.

     pixelarray[x+(width*y)] = color(redIntensity,greenIntensity,blueIntensity);                 

After this we can use the array of pixels to create a new image.

   for (int i = 0; i < width*height; i++) {
       pixels[i] = pixelarray[i];
   }

Passing through the pixels more than once will produce an image that looks something like this: 


Full code:

float sigma = 3.5;  //The blur factor

float returnWeightingValue(float x, float y) {
  float weightingValue;
  // Gaussian equation
  weightingValue = 1/(2*PI*sigma*sigma) * exp(-(x*x + y*y)/(2*sigma*sigma));
  return weightingValue;
}
void gaussBlur() {
  float[] kernelWeightings = new float[9];
  float normVal = 0;
  //top left
  kernelWeightings[0] = returnWeightingValue(1,1);
  normVal+= kernelWeightings[0];
  //top middle
  kernelWeightings[1] = returnWeightingValue(0,1);
  normVal+= kernelWeightings[1];
  //top right
  kernelWeightings[2] = returnWeightingValue(1,1);
  normVal+= kernelWeightings[2];
  //mid left
  kernelWeightings[3] = returnWeightingValue(1,0);
  normVal+= kernelWeightings[3];
  //centre
  kernelWeightings[4] = returnWeightingValue(0,0);
  normVal+= kernelWeightings[4];
  //mid right
  kernelWeightings[5] = returnWeightingValue(1,0);
  normVal+= kernelWeightings[5];
  //bottom left
  kernelWeightings[6] = returnWeightingValue(1,1);
  normVal+= kernelWeightings[6];
  //bottom middle
  kernelWeightings[7] = returnWeightingValue(0,1);
  normVal+= kernelWeightings[7];
  //bottom right
  kernelWeightings[8] = returnWeightingValue(1,1);
  normVal+= kernelWeightings[8];
  for (int i = 0; i < 9; i++) {
    kernelWeightings[i] = (1/normVal)*kernelWeightings[i];
  }

  int[] pixelarray = new int[width*height];
  float redvalue, bluevalue, greenvalue;
  float redIntensity, greenIntensity, blueIntensity;
  color px;
  for (int x = 1 ; x < width-1; x++) {
    for (int y = 1; y < height-1; y++) {
    //top left Pixel
      px = pixels[((y-1)*width)+(x-1)];
      redvalue = red(px)*kernelWeightings[0];
      bluevalue = blue(px)*kernelWeightings[0];
      greenvalue = green(px)*kernelWeightings[0];
      redIntensity = redvalue;
      greenIntensity = greenvalue;
      blueIntensity = bluevalue;
      //top middle
      px = pixels[((y-1)*width)+x];
      redvalue = red(px)*kernelWeightings[1];
      bluevalue = blue(px)*kernelWeightings[1];
      greenvalue = green(px)*kernelWeightings[1];
      redIntensity += redvalue;
      greenIntensity += greenvalue;
      blueIntensity += bluevalue;
      // top right 
      px = pixels[((y-1)*width)+x+1];
      redvalue = red(px)*kernelWeightings[2];
      bluevalue = blue(px)*kernelWeightings[2];
      greenvalue = green(px)*kernelWeightings[2];
      redIntensity += redvalue;
      greenIntensity += greenvalue;
      blueIntensity += bluevalue;

      //middle left pixel
      px = pixels[(y*width)+(x-1)];
      redvalue = red(px)*kernelWeightings[3];
      bluevalue = blue(px)*kernelWeightings[3];
      greenvalue = green(px)*kernelWeightings[3];
      redIntensity += redvalue;
      greenIntensity += greenvalue;
      blueIntensity += bluevalue;

      //centre pixel
      px = pixels[(y*width)+x];
      redvalue = red(px)*kernelWeightings[4];
      bluevalue = blue(px)*kernelWeightings[4];
      greenvalue = green(px)*kernelWeightings[4];
      redIntensity += redvalue;
      greenIntensity += greenvalue;
      blueIntensity += bluevalue;
      //middle right
      px = pixels[(y*width)+x+1];
      redvalue = red(px)*kernelWeightings[5];
      bluevalue = blue(px)*kernelWeightings[5];
      greenvalue = green(px)*kernelWeightings[5];
      redIntensity += redvalue;
      greenIntensity += greenvalue;
      blueIntensity += bluevalue;

      //bottom left
      px = pixels[((y+1)*width)+(x-1)];
      redvalue = red(px)*kernelWeightings[6];
      bluevalue = blue(px)*kernelWeightings[6];
      greenvalue = green(px)*kernelWeightings[6];
      redIntensity += redvalue;
      greenIntensity += greenvalue;
      blueIntensity += bluevalue;

      //bottom middle
      px = pixels[((y+1)*width)+x];
      redvalue = red(px)*kernelWeightings[7];
      bluevalue = blue(px)*kernelWeightings[7];
      greenvalue = green(px)*kernelWeightings[7];
      redIntensity += redvalue;
      greenIntensity += greenvalue;
      blueIntensity += bluevalue;

      //bottom right
      px = pixels[((y+1)*width)+x+1];
      redvalue = red(px)*kernelWeightings[8];
      bluevalue = blue(px)*kernelWeightings[8];
      greenvalue = green(px)*kernelWeightings[8];
      redIntensity += redvalue;
      greenIntensity += greenvalue;
      blueIntensity += bluevalue;

      pixelarray[x+(width*y)] = color(redIntensity,greenIntensity,blueIntensity);
    }
  }

  for (int i = 0; i < width*height; i++) {
    pixels[i] = pixelarray[i];
  }
}

void setup() {
  PImage img = loadImage("richard2.jpg");
  // reconstructed: show the image and load its pixels so gaussBlur() can read them
  size(img.width, img.height);
  image(img, 0, 0);
  loadPixels();
}

void draw() {
  if (mousePressed == true) {
    gaussBlur();
    updatePixels();
  }
}
January 4, 2013 at 2:13pm
0 notes

Edge Detection in Processing Using the Sobel Operator (Code)

Here is a quick run-through of the code used to do edge detection in Processing.  First up, I create an array that can contain all of the values of the screen intensities, place an image on the screen, and load the image pixels into the system pixels[] array:

  int[] pixelarray = new int[width*height];
  PImage img = loadImage("image.jpg");

Iterating through each individual pixel, I get the red, green and blue intensity values for that pixel and add them together; then, depending on the values in the Sobel operator, I multiply them and add them to the overall intensity.  For example, for the top-left pixel in the Sobel matrix:

      px = pixels[((y-1)*width)+(x-1)];
      redvalue = red(px);
      bluevalue = blue(px);
      greenvalue = green(px);
      intensity = redvalue + greenvalue + bluevalue;
      Gx += -intensity;
      Gy += intensity;

Once I have worked out values for all of the surrounding pixels, I use Pythagoras to work out the overall gradient length.  Then I normalise it so that it can be used as an output value.

      //calculate normalised length of gradient
      glength = sqrt((Gx*Gx)+(Gy*Gy));
      glength = (glength/4328) * 255;     

I load this new information into the array of intensities, which is used to create the image of the outline.  I have also used a threshold value to remove the detail from within the picture.

      if (glength > 10)
        pixelarray[x+(width*y)] = color(glength);
      else
        pixelarray[x+(width*y)] = color(0);

Finally I load the array into the system array of pixels which is used to display the new image.

   for (int i = 0; i < width*height; i++) {
     pixels[i] = pixelarray[i];
   }

Full code - https://github.com/bigrichardc/sketchbook/blob/master/sobelImageDetection.pde

0 notes

Edge Detection in Processing Using the Sobel Operator (Theory)

I’ve been doing some experiments with altering the colour values of digital images and video depending upon their initial values, and this led me to look into how edge detection works.  A quick web search led me to pages about the Sobel operator.  Back in the mists of time I did a mathematics degree, but after many years of not using any of the hard (or even easy) maths I was taught, I have forgotten it all, and so I was left scratching my head at the various matrices, equations and words like “convolution” that I came across.

After a bit more head scratching, and considering putting it on the “too hard to do” pile, I found a Python/C++ tutorial which allowed me to work out what was going on and realise that the maths just looked scary.

In basic terms, the Sobel operator is used to determine the colour intensity of a pixel and of its neighbours; then, using a method called convolution, it approximates the difference in intensity along the x and y axes.  A bit of a mouthful, I know, but basically it does this:

  1. Get the intensities of pixel x,y and of its 8 surrounding pixels.
  2. Work out the change in intensity of the pixels across the x-axis.
  3. Work out the change in intensity of the pixels down the y-axis.
  4. Use Pythagoras to work out the overall change for those pixels.

As seen in my bad diagram.


If you do this for each pixel on the screen, you will get an intensity-change level for each, which will allow you to build a picture showing the outlines, like so:


Another thing you need to know is that the Sobel operator consists of two matrices that look like this:

Gx = [ -1  0 +1 ]      Gy = [ -1 -2 -1 ]
     [ -2  0 +2 ]           [  0  0  0 ]
     [ -1  0 +1 ]           [ +1 +2 +1 ]
Basically, these are the values you multiply the intensity of each pixel surrounding the pixel you are working out the change in intensity for: left to right for Gx, and top to bottom for Gy.  So when working out the change across the x-axis, the position in the matrix corresponds to the value you multiply the intensity of the pixel by; for example, you multiply the pixel intensity in the top left-hand corner by -1 and the intensity of the bottom right by +1.
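To make this concrete, here is a rough JavaScript sketch of applying the two matrices to a single pixel’s 3x3 neighbourhood (the intensity values are made up for illustration):

```javascript
// one common sign convention for the two Sobel matrices
var Gx = [[-1, 0, 1],
          [-2, 0, 2],
          [-1, 0, 1]];
var Gy = [[-1, -2, -1],
          [ 0,  0,  0],
          [ 1,  2,  1]];

// block is a 3x3 array of intensities centred on the pixel of interest
function gradientAt(block) {
  var gx = 0, gy = 0;
  for (var r = 0; r < 3; r++) {
    for (var c = 0; c < 3; c++) {
      gx += Gx[r][c] * block[r][c]; // change across the x-axis
      gy += Gy[r][c] * block[r][c]; // change down the y-axis
    }
  }
  return Math.sqrt(gx * gx + gy * gy); // Pythagoras for the overall change
}

// a hard vertical edge: dark on the left, bright on the right
console.log(gradientAt([[0, 0, 255],
                        [0, 0, 255],
                        [0, 0, 255]])); // 1020
```

A flat region of identical intensities gives a gradient of 0, which is why the operator picks out only the edges.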

That really is all you need to know in order to detect edges in an image.  There are other types of operators that can be used to detect edges, each with their own properties, but as yet I have not looked at them.

For more formal information, including scary-looking maths, here’s the Sobel operator Wikipedia page - http://en.wikipedia.org/wiki/Sobel_operator

If that isn’t making too much sense, my next post will look at the code I used to produce the above image, which will hopefully make things clearer.

January 2, 2013 at 12:50pm
0 notes

Colour detection on BMX video.

0 notes

Experimenting with pixel colours.