SpeedCam – Getting the Data

*** UPDATE ***

DO NOT PURCHASE THIS SOFTWARE.  The company is no longer activating the software.  If you have purchased it, please request a refund from FastSpring (support@fastspring.zendesk.com)


I recently got to be on the news for a fun project (see the bottom of the article for the video).  We have had issues with cars speeding down our street.  I had the traffic department place one of the signs that shows your speed on our street.  This did give us some data, but drivers who saw the sign only changed their behavior for that one pass.

Being a person who works with data, I figured there had to be a way to capture this as a data source.  I tried to build my own system to track the cars going by.  After trying a few different things with an Arduino and a Raspberry Pi, I started reading about using a webcam to track cars.

My setup is as follows:
Camera: HIKVision IP Camera (but a USB camera will work also as shown in the news video)
Power Injector: TP-LINK TL-PoE150S
Computer: Dell Laptop running Windows 10
Speed Camera Software: SpeedCam AI *see notes about the software
Data Analysis Tool: Splunk

I tried a few different programs and found SpeedCam AI.  This program lets me draw a rectangle over the road and define the real-world distance it covers.  I know that the sections of the street are 15 feet (4.572 meters) in length.
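As a rough illustration of why that distance matters: speed is just the zone length divided by how long a vehicle takes to cross it.  This is my assumption about the general approach, not anything from the SpeedCam AI documentation, and the crossing time below is made up.  A quick check anywhere awk is available:

# Hypothetical example: a car crossing the 15 ft zone in 0.25 seconds
FEET=15
CROSS_TIME=0.25
awk -v ft="$FEET" -v s="$CROSS_TIME" 'BEGIN { printf "%.1f mph\n", (ft / s) * 3600 / 5280 }'
# prints: 40.9 mph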

I set up two different lanes.  Lane 1 is for westbound traffic and Lane 2 is for eastbound traffic.  In the settings you can specify the delimiter.  You can also use the software to save a picture of each vehicle and to clean up the reports.

With SpeedCam AI writing the details of traffic to a csv file, Splunk can easily ingest the data.

Installing Splunk on Windows
Installing Splunk on Linux

Adding the data to Splunk:
Once you log in to Splunk, you should see an “Add Data” button.

There are a couple of options for bringing the data in.  Select “Monitor” to be able to continuously bring in the data.

You will then want to select “Files & Directories”.

Click “Browse” to select your “reports.csv” file and then click “Next”.

You should see a preview of your data, and you will see that Splunk has identified it as a csv file.  Since the file doesn’t have a header row, you will need to give it one.  In the Delimited settings, under the Field names section, click Custom.  In this example I used “datestamp,lane,speed,speedLabel”.  Then click Next to continue.

It should prompt you to save your custom sourcetype.  Click Save.

I named the sourcetype “speedcam”, gave it a description, left the category and app at their defaults, and then clicked Save.

On the next page we can set the hostname for the data stream.  Normally you can leave this at the default.  In a production environment, we would also want to choose our index.  For this example, I am going to leave it as “Default”.  At this point we can click “Review”.

You can review the settings, then click Submit, and Splunk will start bringing in your data.


For the Command Line People
## inputs.conf ##
[monitor://C:\Program Files (x86)\SpeedCam\reports\reports.csv]
sourcetype = speedcam

## props.conf ##
[speedcam]
INDEXED_EXTRACTIONS = CSV
FIELD_DELIMITER = ,
FIELD_NAMES = datestamp,lane,speed,speedLabel
CHECK_FOR_HEADER = false
SHOULD_LINEMERGE = false
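One note for the conf-file route: Splunk does not pick up hand-edited inputs.conf and props.conf until it restarts, so finish with a restart.  The path below is for a Linux install; on a default Windows install the binary lives under C:\Program Files\Splunk\bin.

## restart Splunk after editing the conf files ##
# $SPLUNK_HOME/bin/splunk restart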


At this point, you have the SpeedCam AI software running and Splunk bringing the data in.  I will follow up with another post on the Splunk App I have written.  In the meantime, here are a few videos on searching and reporting in Splunk.

Basic Searching in Splunk
Creating Reports in Splunk Enterprise
Create Dashboards in Splunk Enterprise
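Once events are flowing, a quick command-line sanity check also works.  The search below is just my own example built on the field names defined earlier (it is not part of the upcoming app, and on Windows the splunk binary lives under the install's bin folder); it reports the average speed, top speed, and event count per lane over the last 24 hours.

# $SPLUNK_HOME/bin/splunk search 'sourcetype=speedcam earliest=-24h | stats avg(speed) max(speed) count by lane'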


Things to note:
I have submitted a few tickets to the maker of SpeedCam AI and have not received any response to the support requests.  I tried to move the software to another computer and was unable to activate the trial version (there is no trial download on the website) or register with my activation code.  I cannot say whether the software will still work.


ElectroSmash pedalShield Mega – Part 1

My oldest son has been getting really into music lately.  He has taught himself guitar, bass, ukulele, piano, and most recently violin.  Having an electrical background, I started to look at the different ways pedals and guitars are put together.  I started looking at pedal clones and wanted to build a pedal for my son.  After looking around I found the pedalShield series.  I like working with Arduinos and Raspberry Pis, as you still get to use real components and easily interact with them.  The pedalShield Mega looked interesting as it has an LED screen on it to help you see your effects.  I was also interested in being able to flash new effects onto the pedal as needed.

I have decided to give it a go and have ordered the pedalSHIELD MEGA Kit.  They give you all the schematics and part numbers (minus the LED) so you can order the parts yourself from Mouser.  Pricing it out, you do save money ordering the kit directly from ElectroSmash.  The only problem for me is that it ships internationally, so there is a bit of a wait.  I also needed to order the Arduino Mega 2560 board, which is the brains of the programmable pedal.  My normal go-to is Adafruit, but their site lists the board as discontinued (link).  After reading a few reviews, I decided to go with a clone board from Amazon: the Elegoo LYSB01H4ZDYCE-ELECTRNCSMEGA 2560 R3 Board.  While I was on the Amazon site, I felt that to do the job properly I needed a new soldering iron, helping hands, and cutters.  The quick math is that I will be doing around 141 solder points for this project.

So far I have spent $108.92 on the project:
$14.86 – Arduino Mega 2560 Clone
$25.85 – Tools
$00.00 – Amazon Prime Shipping
$53.84 – ElectroSmash Kit
$14.37 – Shipping from ElectroSmash

I will still need to get some standoffs, to keep everything nice and stable when he steps on the pedal, as well as a case enclosure.

I have been going through the forums and looking at the work others have already done on the programming.  I am looking forward to this project, as I haven’t done one like it in a while.

Geist Watchdog 15, SNMP, and Splunk

I have a few of the Geist Watchdog 15 devices in my data center.  They do a good job monitoring, but getting data out of them isn’t as easy as it could be.  Their latest firmware does introduce JSON over XML.  Unfortunately, there is no way to make API calls that return specific time frames; you have to download the whole log file.  Geist leans heavily on SNMP for pulling the information.  While this is normally fine, you do need the custom MIB file for the device, which makes it a pain.  I tried multiple ways to have Splunk grab the values from the device, but failed each time.  With a deadline to produce a dashboard (it was 11pm and we had people visiting the office at 8am), I put my Google, Linux, and Splunk skills to the test.

First, let’s install the SNMP tools.

# yum install net-snmp net-snmp-devel net-snmp-utils

Let’s check where the default MIB directories are.


# net-snmp-config --default-mibdirs
/root/.snmp/mibs:/usr/share/snmp/mibs

We will want to copy the MIB file to the second location.

# cp /tmp/geist_bb_mib.mib /usr/share/snmp/mibs/geist_bb_mib.mib
(Your source location will differ; /tmp/ is just where I had copied the file.)

Referencing the MIB Worksheet, we can find the OID for the items we want.  In this script I selected: internalName, internalTemp, internalDewPoint, internalHumidity, tempSensorName, tempSensorTemp

Geist does not include the leading period on the OIDs.  In the worksheet they list internalName as 1.3.6.1.4.1.21239.5.1.2.1.3, where the SNMP call would use .1.3.6.1.4.1.21239.5.1.2.1.3.  We also need to append the device index to the end of the OID.  The base for the Remote Temperature Sensor is .1.3.6.1.4.1.21239.5.1.4.1.3, so to call the first Remote Temperature Sensor I would reference .1.3.6.1.4.1.21239.5.1.4.1.3.1, and the second sensor is .1.3.6.1.4.1.21239.5.1.4.1.3.2.
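To make that composition explicit, here is a tiny sketch of how the full OID is put together.  The base value comes straight from the MIB worksheet; the variable names are just mine.

TEMPNAME_BASE="1.3.6.1.4.1.21239.5.1.4.1.3"    # tempSensorName base from the worksheet
SENSOR_INDEX=2                                 # second remote temperature sensor
OID=".${TEMPNAME_BASE}.${SENSOR_INDEX}"        # add the leading period and the device index
echo "$OID"
# .1.3.6.1.4.1.21239.5.1.4.1.3.2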

To make the call to the device using SNMP, we will be using the snmpget command.

# /usr/bin/snmpget -m all -Ov -v 2c -c public 10.10.10.10 .1.3.6.1.4.1.21239.5.1.4.1.3.1

-m all = Use all of the MIB files
-Ov = Print values only
-v 2c = Use version 2c
-c public = Use the public SNMP community string
10.10.10.10 = IP address of the Watchdog 15
.1.3.6.1.4.1.21239.5.1.4.1.3.1 = tempSensorName for Device 1

STRING: ExternalTempSensor1

We are almost there.  Now to clean up the output so we only get the value portion of the response.

# /usr/bin/snmpget -m all -Ov -v 2c -c public 10.10.10.10 .1.3.6.1.4.1.21239.5.1.4.1.3.1 | awk '{print $2}'
ExternalTempSensor1

Great, now we are getting just the value.  Time to tie the field and value together.  Since the internal name will be the same while we gather multiple values, I also append “_temp” so I can tell which reading I am getting.

InternalName01=`/usr/bin/snmpget -m all -Ov -v 2c -c public 10.10.10.10 .1.3.6.1.4.1.21239.5.1.2.1.3.1 | awk '{print $2}'`
InternalTemp01=`/usr/bin/snmpget -m all -Ov -v 2c -c public 10.10.10.10 .1.3.6.1.4.1.21239.5.1.2.1.5.1 | awk '{print $2}'`
Section01=$InternalName01"_temp,"$InternalTemp01
echo $Section01
ExternalTempSensor1_temp,871
 

Almost there, now let’s add a date/time stamp.

InternalName01=`/usr/bin/snmpget -m all -Ov -v 2c -c public 10.10.10.10 .1.3.6.1.4.1.21239.5.1.2.1.3.1 | awk '{print $2}'`
InternalTemp01=`/usr/bin/snmpget -m all -Ov -v 2c -c public 10.10.10.10 .1.3.6.1.4.1.21239.5.1.2.1.5.1 | awk '{print $2}'`
Section01=$InternalName01"_temp,"$InternalTemp01
echo -e `date --rfc-3339=seconds`","$Section01
2016-05-16 22:07:57-05:00,ExternalTempSensor1_temp,871
 

I repeated the section for the different pieces of sensor data I wanted and ended up with a small script.

#!/bin/bash

# Internal (base unit) sensor: name and temperature
InternalName01=`/usr/bin/snmpget -m all -Ov -v 2c -c public 10.10.10.10 .1.3.6.1.4.1.21239.5.1.2.1.3.1 | awk '{print $2}'`
InternalTemp01=`/usr/bin/snmpget -m all -Ov -v 2c -c public 10.10.10.10 .1.3.6.1.4.1.21239.5.1.2.1.5.1 | awk '{print $2}'`
Section01=$InternalName01"_temp,"$InternalTemp01
echo -e `date --rfc-3339=seconds`","$Section01

# Internal dew point
InternalDewPoint01=`/usr/bin/snmpget -m all -Ov -v 2c -c public 10.10.10.10 .1.3.6.1.4.1.21239.5.1.2.1.7.1 | awk '{print $2}'`
Section02=$InternalName01"_dewpoint,"$InternalDewPoint01
echo -e `date --rfc-3339=seconds`","$Section02

# Internal humidity
InternalHumidity01=`/usr/bin/snmpget -m all -Ov -v 2c -c public 10.10.10.10 .1.3.6.1.4.1.21239.5.1.2.1.6.1 | awk '{print $2}'`
Section03=$InternalName01"_humidity,"$InternalHumidity01
echo -e `date --rfc-3339=seconds`","$Section03

# Remote temperature sensor 1: name and temperature
RemoteName01=`/usr/bin/snmpget -m all -Ov -v 2c -c public 10.10.10.10 .1.3.6.1.4.1.21239.5.1.4.1.3.1 | awk '{print $2}'`
RemoteTemp01=`/usr/bin/snmpget -m all -Ov -v 2c -c public 10.10.10.10 .1.3.6.1.4.1.21239.5.1.4.1.5.1 | awk '{print $2}'`
Section04=$RemoteName01"_temp,"$RemoteTemp01
echo -e `date --rfc-3339=seconds`","$Section04

# Remote temperature sensor 2: name and temperature
RemoteName02=`/usr/bin/snmpget -m all -Ov -v 2c -c public 10.10.10.10 .1.3.6.1.4.1.21239.5.1.4.1.3.2 | awk '{print $2}'`
RemoteTemp02=`/usr/bin/snmpget -m all -Ov -v 2c -c public 10.10.10.10 .1.3.6.1.4.1.21239.5.1.4.1.5.2 | awk '{print $2}'`
Section05=$RemoteName02"_temp,"$RemoteTemp02
echo -e `date --rfc-3339=seconds`","$Section05

Running the script produces output like this:

2016-05-16 22:12:57-05:00,Base_temp,873
2016-05-16 22:12:57-05:00,Base_dewpoint,620
2016-05-16 22:12:57-05:00,Base_humidity,43
2016-05-16 22:12:57-05:00,ExternalSensor1_temp,688
2016-05-16 22:12:57-05:00,ExternalSensor2_temp,717
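A note on the raw numbers, which is my reading of the Geist MIB, so treat it as an assumption and verify against the MIB worksheet for your firmware: the temperature and dew point values appear to be reported in tenths of a degree (873 above would be 87.3°), while humidity is a whole percentage.  If you would rather log the scaled value, you can divide before building the line, for example:

RemoteTemp01=`/usr/bin/snmpget -m all -Ov -v 2c -c public 10.10.10.10 .1.3.6.1.4.1.21239.5.1.4.1.5.1 | awk '{print $2}'`
# assumes tenths-of-a-degree scaling; check the MIB worksheet
RemoteTemp01=`echo $RemoteTemp01 | awk '{printf "%.1f", $1 / 10}'`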

I created the folders /opt/scripts/ and /opt/scripts/logs/.  I placed the script in /opt/scripts/, named it geist.sh, and made it executable with:

# chmod +x /opt/scripts/geist.sh

I then added it to the crontab.  (Note that percent signs have to be escaped with a backslash in a crontab entry, otherwise cron treats them as newlines.)

# crontab -e

*/1 * * * * /opt/scripts/geist.sh >> /opt/scripts/logs/`date +"\%Y\%d\%m"`_geist.log

You can verify that the script is set to run with:

# crontab -l

*/1 * * * * /opt/scripts/geist.sh >> /opt/scripts/logs/`date +"\%Y\%d\%m"`_geist.log

Now we can log in to Splunk and add the log file as an input.  After you log in, go to Settings and then Data inputs.


Under the Files & directories, click the Add new link.


Under the Full path to your data, enter the path to the log file you are writing in the crontab.  Check the box for the More settings option.


You can set the Host that will be indexed with your data.  For the source type, select From list and then select csv.  You can then select an index for the log files.


Now we will set up the field extractions.  You will need to edit the props.conf and transforms.conf files.  If you want to keep this in a certain application, change the file path to $SPLUNK_HOME/etc/apps/{appname}/local/props.conf.

# vi $SPLUNK_HOME/etc/system/local/props.conf
[csv]
REPORT-Geist = REPORT-Geist

# vi $SPLUNK_HOME/etc/system/local/transforms.conf

[REPORT-Geist]
DELIMS = ","
FIELDS = "DateTime","SensorName","SensorValue"

Restart Splunk and you should be able to search your SNMP values.

# $SPLUNK_HOME/bin/splunk restart
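Once the extraction is in place, a quick search confirms the fields are coming through.  This is just my own example against the field names defined above; it pulls the latest reading for each temperature sensor.

# $SPLUNK_HOME/bin/splunk search 'sourcetype=csv SensorName=*_temp | stats latest(SensorValue) by SensorName'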

The Hacker Manifesto turns 30

The Hacker Manifesto turns 30 today. I remember the first time reading this. I still get goosebumps. I lived the era of the BBS. I was the kid tying up the phone line. I remember the rush of connecting to systems and exploring. Talking to people I didn’t know but I did know them.  We shared knowledge and experience.
 
We were the Keyboard Cowboys, the System’s Samurai, and the Phone Phreaks.

\/\The Conscience of a Hacker/\/

Hacking In Paradise 2013 – Why I want to go

Joseph McCray (@j0emccray) is someone whose talks and videos I have been following for a while now.  I first saw him at Defcon.  He is “The only black guy at security conferences”.  With the growth of the security industry, there are “experts” coming out of the woodwork.  I had to put experts in quotes because it seems like everyone has an opinion.  There are more certification tags tacked on to people’s names than I can believe.  In this world where everyone has gone through “training” (training to pass a test), it is hard to find the people who truly have a passion for and dedication to true security.

So this comes to why I want to go.  For a while, part of my job has been security.  I have written policies to tell people what to do and what not to do.  I have helped guide companies toward “best practices”.  I have helped people regain access to systems they were locked out of.  And I have done more of the old-school hacking: taking things apart to see how they work and how they can be made better, or defeated.  That is a lot of my daily job as a systems engineer.  Working in the corporate world has taught me that everyone sets things up differently, and sometimes you need to reverse engineer how they configured things to figure out how to make it work.  So why would I want to go?  Because I don’t know enough.  There is so much out there that I don’t know.  Going over the list of topics that are covered strikes a little fear in me.  Topics like Metasploit, Maltego, Nmap, Nikto, IDS, HIDS, NIDS, SIEM.  I will need a translator just for the names and acronyms.

This is the type of training I truly enjoy.  You are completely immersed in it.  Being away from work, in an environment with your peers and instructors, you end up living the training and bouncing ideas off each other.  While doing some activity, a conversation will strike up about a topic and you spend the next hour working through ideas.  In the CyberWar class, you get to attack fully patched, newer operating systems (Windows 7, Server 2008 R2, and Linux) with all the intrusion detection tools turned on.  You get to see the logs and alerts that are generated.  You don’t just learn about tools; you learn why these tools work and what effect they have on the systems.  This is how training should be run!

Hacking In Paradise 2013
http://strategicsec.com/services/training-services/classroom/hacking-in-paradise/

DEFCON 17: Advanced SQL Injection
http://www.youtube.com/watch?v=rdyQoUNeXSg

DEFCON 18: Joseph McCray – You Spent All That Money and You Still Got Owned
http://www.youtube.com/watch?v=aYVFBnurpNY

Omaha/Lincoln Splunk User Group – Update

I have mentioned in two different posts (http://www.anthonyreinke.com/?p=610 and http://www.anthonyreinke.com/?p=605) that I was starting a Splunk User Group in the Omaha/Lincoln area.  The first meeting will be on March 12th from 6pm to 9pm at Charlies on the Lake in Omaha.  Register for the event at http://t.co/syA5AFTO7U.

VENUE
Charlies on the Lake
4150 South 144th Street
Omaha, NE 68137

WHEN
Tuesday, March 12th
6:00pm – 9:00pm

AGENDA

  • What’s New in Splunk 5.0? Presentations by Splunk SEs
  • Open Forum


Hi There,

Don’t forget to register for the Splunk User Group in Omaha on March 12th! We’ll get together to share ideas and learn from one another. Whether you are getting started, creating intelligent searches and alerts, or building complex dashboards, this group is for you. Meet other Splunk users and get the tips you need to be more successful. Click here to register. There is limited availability, so register today to secure your spot. Expect lots of discussion, snacks, drinks and, of course, t-shirts!

For any questions about this meeting, feel free to contact:
Mike Mizener
mike.mizener@continuumww.com
402.916.1803

We look forward to seeing you!

The Splunk Team and Continuum

 


 

My first non-tutorial Arduino project

I have been playing with the Arduino Uno board, and after going through a bunch of tutorials, I wanted to branch out and do my own project.  I have the HC-SR04 ultrasonic module and a standard piezoelectric buzzer.  On the ultrasonic module, VCC goes to digital pin 2, Trig goes to digital pin 3, and Echo goes to digital pin 4.  GND goes to the ground rail, which connects to the GND pin on the Arduino.  On the buzzer, the positive lead goes to pin 11 and the negative lead goes to the same ground rail.  Below is the code:

 

void setup() {
  pinMode(2, OUTPUT);   // pin 2 powers the HC-SR04 VCC
  pinMode(5, OUTPUT);   // set as an output but not used with the wiring described above
  // initialize serial communication:
  Serial.begin(9600);
  pinMode(11, OUTPUT);  // sets the pin of the buzzer as output
}

void loop()
{
  digitalWrite(2, HIGH);  // keep the sensor's VCC pin high
  // establish variables for duration of the ping,
  // and the distance result in inches and centimeters:
  long duration, inches, cm;

  // The sensor is triggered by a HIGH pulse of 2 or more microseconds.
  // Give a short LOW pulse beforehand to ensure a clean HIGH pulse:
  pinMode(3, OUTPUT);   // attach pin 3 to Trig
  digitalWrite(3, LOW);
  delayMicroseconds(2);
  digitalWrite(3, HIGH);
  delayMicroseconds(5);
  digitalWrite(3, LOW);

  // The Echo pin outputs a HIGH pulse whose duration is the time (in
  // microseconds) from the sending of the ping to the reception of its
  // echo off of an object.
  pinMode(4, INPUT);    // attach pin 4 to Echo
  duration = pulseIn(4, HIGH);

  // convert the time into a distance
  inches = microsecondsToInches(duration);
  cm = microsecondsToCentimeters(duration);

  Serial.print(inches);
  Serial.print("in, ");
  Serial.print(cm);
  Serial.print("cm");
  Serial.println();

  if (cm < 50) {
    analogWrite(11, 128);   // sound the buzzer when something is closer than 50 cm
  }
  else {
    digitalWrite(11, LOW);  // otherwise keep the buzzer off
  }

  delay(100);
}
long microsecondsToInches(long microseconds)
{
  // According to Parallax's datasheet for the PING))), there are
  // 73.746 microseconds per inch (i.e. sound travels at 1130 feet per
  // second). This gives the distance travelled by the ping, outbound
  // and return, so we divide by 2 to get the distance of the obstacle.
  // See: http://www.parallax.com/dl/docs/prod/acc/28015-PING-v1.3.pdf
  return microseconds / 74 / 2;
}

long microsecondsToCentimeters(long microseconds)
{
  // The speed of sound is 340 m/s or 29 microseconds per centimeter.
  // The ping travels out and back, so to find the distance of the
  // object we take half of the distance travelled.
  return microseconds / 29 / 2;
}

Update on Splunk User Group

Recently I shared that I was working with Continuum (http://www.continuumww.com) to start a Splunk User Group in the Lincoln/Omaha area (http://www.anthonyreinke.com/?p=605).  Since then, Mike Mizener (mike.mizener@continuumww.com) has found us a location and we have agreed on a first meeting date.  We will be meeting on Tuesday, February 26th, from 6pm to 9pm at Charlie’s on the Lake (http://www.charliesonthelake.net).  The topic for this first meeting will be: What’s new in Splunk 5.0.  More details are coming, but if you have ideas for topics or any other questions, please let me know.

Splunk User Group in Lincoln/Omaha Nebraska

I am currently working with Continuum (http://www.continuumww.com) to bring the Lincoln/Omaha area of Nebraska a Splunk user group. I am a big believer in the sharing of knowledge. With that I love to go on to the Splunk Answers site and review issues or questions people have and try to help them. When I was learning IT, someone took the time to answer my questions. I want to give back to the community that has taught me so much. This is where my sports life meets my geek life. I want to be that coach to help others get the most of IT. Look for more information shortly.