Downloading Torrents Remotely

I’ve set up a home server and, wanting a simple, straightforward way to download torrents remotely, I relied on an old hack I’d heard about but never attempted. You can configure Transmission, the torrent client, to pick up torrent files from a certain directory; sharing this directory on Google Drive means you can drop in torrent files from anywhere and Transmission will download them for you at home.
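For reference, the watch-directory behaviour lives in Transmission's settings.json. A minimal sketch is below; the paths are placeholders for your own layout, and the file should be edited while the daemon is stopped so it isn't overwritten on shutdown:

```json
{
  "download-dir": "/home/user/Downloads",
  "watch-dir": "/home/user/GoogleDrive/torrents",
  "watch-dir-enabled": true
}
```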


The problem is that you can’t really monitor the progress of the torrents, and some of these torrent files may not even start. So I decided to write a small shell script that monitors two events and updates me through Pushbullet: the first event is the start of a torrent download (the creation of a *.part file), and the second is the completion of the download (a new file in the downloads directory).

The script works as follows:


 
#!/bin/bash
partsLocation=<DOWNLOADS LOCATION HERE>
completeLocation=<COMPLETE LOCATION HERE>
logLocation=./TorrentsMonitor.log
pushBulletAPI=<PBAPI KEY HERE>

touch parts.log

#-------------------------
#scan for newly started downloads (*.part files)
#-------------------------
# spaces in file names are mapped to underscores so the for loop doesn't split them
for i in $(find "$partsLocation" -maxdepth 1 -name "*.part" | sed -e "s/ /_/g"); do
    echo "Handling file"
    echo "$i"
    echo "----------------------"

    # check whether this file has already been reported
    countOfParts=$(grep -Fc -- "$i" parts.log)
    echo "$countOfParts"
    if [ "$countOfParts" -gt 0 ]; then
        echo "already listed"
    else
        echo "new file, adding to parts.log"
        echo "$i" >> parts.log
        curl -u "$pushBulletAPI": https://api.pushbullet.com/v2/pushes \
             -d type=note -d title="Tor Started" -d body="Download started for file $i"
    fi
done

#-------------------------
#scan for completed downloads
#-------------------------
touch complete.log

for i in $(find "$completeLocation" -mindepth 1 -maxdepth 1 ! -name '*.part' ! -name '*.log' ! -name '*.sh*' | sed -e "s/ /_/g"); do
    echo "Handling file"
    echo "$i"
    echo "----------------------"

    # check whether this file has already been reported
    countOfComplete=$(grep -Fc -- "$i" complete.log)
    echo "$countOfComplete"
    if [ "$countOfComplete" -gt 0 ]; then
        echo "already listed"
    else
        echo "new file, adding to complete.log"
        echo "$i" >> complete.log
        curl -u "$pushBulletAPI": https://api.pushbullet.com/v2/pushes \
             -d type=note -d title="Tor Completed" -d body="Download Completed for file $i"
    fi
done
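The script takes a single pass over the directories each run, so it has to be scheduled. A crontab entry along these lines would run it every five minutes; the path is a made-up placeholder for wherever the script and its log actually live:

```
# run the torrent monitor every five minutes (paths are illustrative)
*/5 * * * * /opt/scripts/TorrentsMonitor.sh >> /opt/scripts/TorrentsMonitor.log 2>&1
```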

IPTV Analytics and Its Impact On Privacy

I don’t think anybody can deny that IPTV is the future; in a few years the current paradigm of the TV watching experience will be extinct. Now that most people are used to being able to scroll backwards and forwards through content, traditional TV seems outdated, which explains the rise of content services such as Hulu and Netflix. DVRs such as TiVo aim to provide that as well, decoupling content from delivery: content you can consume in any order you choose, rather than on a schedule someone at the network believed would work for most people. It’s no surprise that all the major IT companies (Google, Amazon, Apple, and even Seagate) are tapping into DVR/IPTV in some capacity or another.

 

When I first booted up my new Samsung smart TV, I was shown a message saying that the more I watch this TV, the more content it’ll be able to recommend to me. Instantly I wondered where all this analytical data is stored and who has access to it.

 

TVs are slowly transforming into two-way devices that pick and consume content based on the viewer’s choices rather than just projecting a pre-scheduled feed. Basically, TV is moving closer to the internet-browser metaphor, and the analytical information associated with these choices provides a whole new angle on how users consume media. Classically, the only way TV networks could measure the viewership of any particular show was through surveys; with IPTV they can simply record every click of the remote, and harvesting this information yields viewership data far more granular than any they’ve ever had.

 

Netflix Watching the viewers as House of Cards new season goes live


The smash hit “House of Cards”, an award-winning and highly addictive series, was produced by Netflix. A big-data analytics approach was used for most decisions related to the show, from picking the actors to the director, even as far as the premise of the entire show. They were able to predict the potential viewership well before producing it, the same way they are able to suggest to their users the shows they will most probably like. Even after production, they can monitor who watched it, at what pace, and which parts people usually skip or replay.

 

Basically, as you watch TV, the IPTV provider is watching you. It learns your TV watching habits: which channels you spend more time watching, which shows you like, and which ads you skip through. It’s similar to an internet browser, with one big difference: you can’t turn off the cookies and surf in private mode; you can’t turn it off at all. The set-top box is provided by the network and is tamper-proof; if you are using it, it’s in constant contact with a server in some data center, collecting data on your usage habits, perhaps even using it to serve you context-aware ads much like Google’s AdSense.

 

So does my telecom/cable company know that I secretly like to watch #RichKids of Beverly Hills?

 

It gets even more interesting when you realize that most telecom providers are heading towards IPTV. Telecoms already know a lot about you: your usage trends, where you go, whom you call, and sometimes even which websites you visit. Most telcos have extensive profiles on each of their users, and they use such data to build promos and offers. Imagine how powerful they’ll be once they know your TV watching habits as well. With everybody worried about privacy in the age of social networks and mobile, people seem oblivious to the new threat to privacy sitting innocently in every living room on the planet.

 

The telescreen is always watching

In George Orwell’s novel 1984, each home was equipped with a telescreen, playing propaganda and constantly watching the viewers.

 

The silver lining in all of this is that instead of watching irrelevant ads, perhaps someday the ads you see on TV will actually match your needs. Content production will be quite different in the future, with shows tailored to customers’ viewing habits rather than surveys, increasing the synergy between production and consumption and bringing theater’s main advantage, instant feedback into the production, to TV. Maybe soon you’ll be able to rate each episode as you watch it, and your set-top box will curate and serve content based on these selections and ratings. That comes at a cost, of course: the complete loss of privacy, and being watched by our TVs.

 

As an afterthought, think of what politicians could do with such a technology: receiving instant feedback on the talk shows they appear on, and seeing whether people keep watching or flip to another channel.

 

 

Sources:

http://www.dataenthusiast.com/2014/02/big-data-analytics-and-netflixs-house-of-cards/

http://www.marketplace.org/topics/business/what-happens-netflix-when-house-cards-goes-live

Pachube Monitoring/Reporting for Free

Owning your own Cacti instance is a luxury not many enjoy; I happen to be one of them. Even though seeing completely useless data plotted against time brings me a lot of pleasure, keeping it running is costly (I host it on Amazon EC2), not to mention the sheer effort of making sure the environment it’s hosted on works as expected. https://pachube.com/ solves that: it provides, mainly, a plotting service. You beam up your data and it gets represented on a chart; for a slightly OCD person like me this is a dream come true.

They have several subscription plans. The free one has everything I need but is limited to one month of historical data, around 500 readings per day, and a limited number of graphs; still, that’s more than I need for the purposes I have in mind. An additional advantage of the service is a Java API wrapper that can be used to insert entries to be plotted. The difference from Cacti is that it’s a lot easier for passive readings, i.e. readings pushed to it rather than polled by Cacti.

What I have in mind now is using this along with my Arduino heat sensor to report the temperature and humidity in my apartment, using a simple laptop with an internet connection to push the values. I’m going to blog about that in a later entry.
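To get a feel for the API, a reading can be beamed up with nothing more than curl. The sketch below targets Pachube's v2 CSV endpoint as a dry run; the feed ID, the datastream name temperature, and the key are placeholders I made up, and removing the leading echo would actually send the request:

```shell
#!/bin/bash
# All identifiers below are illustrative placeholders, not real credentials.
API_KEY="YOUR_PACHUBE_API_KEY"
FEED_ID=12345
TEMP=24.5   # a real setup would parse this from the Arduino's serial output

# The v2 API accepts a CSV body of the form "datastream_id,value".
# The leading echo makes this a dry run; remove it to actually send the reading.
echo curl --request PUT \
     --header "X-PachubeApiKey: $API_KEY" \
     --data "temperature,$TEMP" \
     "http://api.pachube.com/v2/feeds/$FEED_ID.csv"
```

Once the Arduino side is wired up, the same one-liner can run from cron or be fed live from the serial reader.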

Arduino First Project (PIR Java motion sensor)

As a computer science graduate I didn’t get a chance to build many projects that function outside the computer. Locked within that dimension, I could interface with reality only through people; that is, I relied on people’s input rather than sensors. I decided to change that, using some wires, the serial port, and the Arduino Uno.

My Arduino equipment

I ordered it a few days ago, along with two sensors: a PIR (analog motion sensor) and a digital heat and humidity reader. Between these two sensors I’m going to build several applications that read things from the physical dimension and log them, or perhaps even take action based on them. At a later stage I’m planning to take it to the next level, making my code commit physical actions based on certain online events (Twitter, email, or even some Nagios alarm).

Installing the Arduino wasn’t an issue; it was too simple to even mention here. The same goes for writing my first sketch, which reacted to the push of a button (they use the word sketch because, obviously, computer engineers avoid using the word code). For my first project I decided to build a motion sensor that interacts with Java code. I had two challenges:

Challenge I: Hooking up the parts

I brought up the PIR page; it had 3 pins: one GND (ground), one power (5 V), and one sensor. Using the cables I was smart enough to order, I hooked the ground to ground, the power to power, and the sensor pin to input pin 2. Online there was a simple sketch that detects when the pin changes state and writes to the serial port, which I compiled in the Arduino IDE and uploaded to the device. Soon enough the device was detecting movement; I had created my first analog motion sensor.

int ledPin = 13;                // choose the pin for the LED
int inputPin = 2;               // choose the input pin (for PIR sensor)
int pirState = LOW;             // we start, assuming no motion detected
int val = 0;                    // variable for reading the pin status

void setup() {
  pinMode(ledPin, OUTPUT);      // declare LED as output
  pinMode(inputPin, INPUT);     // declare sensor as input

  Serial.begin(9600);
}

void loop(){
  val = digitalRead(inputPin);  // read input value
  if (val == HIGH) {            // check if the input is HIGH
    digitalWrite(ledPin, HIGH);  // turn LED ON
    if (pirState == LOW) {
      // we have just turned on
      Serial.println("Motion detected!");
      // We only want to print on the output change, not state
      pirState = HIGH;
    }
  } else {
    digitalWrite(ledPin, LOW); // turn LED OFF
    if (pirState == HIGH){
      // we have just turned off
      Serial.println("Motion ended!");
      // We only want to print on the output change, not state
      pirState = LOW;
    }
  }
}

Challenge II: Having my Java code read the output

Slightly trickier: it mainly involved grabbing one of the libraries available in the Arduino installation folder (RXTXcomm.jar) and also, for some reason that’s beyond me, copying rxtxSerial.dll from the Arduino installation folder to the c:\windows\system32 folder. Once that was done, I had an application that could read my Arduino’s output (the board was connected on COM10).

package aurdino;

import java.io.InputStream;
import java.io.OutputStream;
import gnu.io.CommPortIdentifier;
import gnu.io.SerialPort;
import gnu.io.SerialPortEvent;
import gnu.io.SerialPortEventListener;
import java.util.Enumeration;

public class serial_test implements SerialPortEventListener {
	SerialPort serialPort;
        /** The port we're normally going to use. */
	private static final String PORT_NAMES[] = {
			"/dev/tty.usbserial-A9007UX1", // Mac OS X
			"/dev/ttyUSB0", // Linux
			"COM10", // Windows
			};
	/** Buffered input stream from the port */
	private InputStream input;
	/** The output stream to the port */
	private OutputStream output;
	/** Milliseconds to block while waiting for port open */
	private static final int TIME_OUT = 2000;
	/** Default bits per second for COM port. */
	private static final int DATA_RATE = 9600;

	public void initialize() {
		CommPortIdentifier portId = null;
		Enumeration portEnum = CommPortIdentifier.getPortIdentifiers();

		// iterate through, looking for the port
		while (portEnum.hasMoreElements()) {
			CommPortIdentifier currPortId = (CommPortIdentifier) portEnum.nextElement();
			for (String portName : PORT_NAMES) {
				if (currPortId.getName().equals(portName)) {
					portId = currPortId;
					break;
				}
			}
		}

		if (portId == null) {
			System.out.println("Could not find COM port.");
			return;
		}

		try {
			// open serial port, and use class name for the appName.
			serialPort = (SerialPort) portId.open(this.getClass().getName(),
					TIME_OUT);

			// set port parameters
			serialPort.setSerialPortParams(DATA_RATE,
					SerialPort.DATABITS_8,
					SerialPort.STOPBITS_1,
					SerialPort.PARITY_NONE);

			// open the streams
			input = serialPort.getInputStream();
			output = serialPort.getOutputStream();

			// add event listeners
			serialPort.addEventListener(this);
			serialPort.notifyOnDataAvailable(true);
		} catch (Exception e) {
			System.err.println(e.toString());
		}
	}
	/**
	 * This should be called when you stop using the port.
	 * This will prevent port locking on platforms like Linux.
	 */
	public synchronized void close() {
		if (serialPort != null) {
			serialPort.removeEventListener();
			serialPort.close();
		}
	}

	/**
	 * Handle an event on the serial port. Read the data and print it.
	 */
	public synchronized void serialEvent(SerialPortEvent oEvent) {
		if (oEvent.getEventType() == SerialPortEvent.DATA_AVAILABLE) {
			try {
				int available = input.available();
				byte chunk[] = new byte[available];
				input.read(chunk, 0, available);

				// Displayed results are codepage dependent
				System.out.print(new String(chunk));
			} catch (Exception e) {
				System.err.println(e.toString());
			}
		}
		// Ignore all the other eventTypes, but you should consider the other ones.
	}

	public static void main(String[] args) throws Exception {
		serial_test main = new serial_test();
		main.initialize();
		System.out.println("Started");
		// keep the program alive; serialEvent fires on the RXTX thread
		Thread.sleep(Long.MAX_VALUE);
	}
}

Magic happened and the Java application was reading the output, an output that I can now hook up to Nagios, have tweeted, or even use to make the Arduino take action (turn on the lights). I reckon it’s a lot simpler to build in shell script; however, I chose Java since I have several projects in mind.

****UPDATE****

I spent a couple of hours trying to get it to work from shell script. The main problem was that I had to read from a live stream and detect an event that would only be available when I queried for it. Finally I decided to go Java all the way, and that, dear reader, is why I usually prefer Java to shell scripting. Anyway, right now I have an application that detects movement around my laptop and tweets it.

Monitoring Hosts on Nagios Without NRPE

I was placed in a situation where I had to monitor a set of highly critical hosts running a minimal RHEL installation, lacking wget, yum, or even the gcc* packages. Installing NRPE on these machines wasn’t possible; furthermore, critical as they were, I wasn’t really allowed to roll up my sleeves and install all of the prerequisites on the way to NRPE. In this post I’ll talk about how I managed to monitor these machines with minimal modification to their setup.

First of all I created the user nagios on all of the hosts. Then, while logged on to the Nagios machine, I exported my SSH key to all of them, making sure that I could log into each and every one of them without having to type in a password.

ssh-copy-id -i ~/.ssh/id_rsa.pub nagios@Host1

I was supposed to monitor the disk space on /, /boot and the load average on each of these machines, so I built two simple scripts to work with Nagios’s “check_by_ssh” plugin. Each script queries a value, compares it against a threshold, and exits with the appropriate code (0: OK, 1: warning, 2: critical).

disk.sh
#!/bin/bash
## checks the used disk space for Nagios
## usage: disk.sh mountpoint critical_used% warning_used%
size=$(df -Ph "$1" | tail -1 | awk '{print $5}')
size=${size%\%}    # strip the trailing % sign

if [ "$size" -gt "$2" ]; then
    echo "Critical $1 size exceeded $2 % current size $size %"
    exit 2
fi

if [ "$size" -gt "$3" ]; then
    echo "Warning $1 size exceeded $3 % current size $size %"
    exit 1
fi

echo "OK $1 current size $size %"
exit 0

and

load.sh
#!/bin/bash
## checks the 15-minute load average: more than 3 critical, more than 2 warning
## usage: load.sh
# field positions in uptime's output shift with the uptime duration,
# so split on "load average:" instead of counting fields
loadavg=$(uptime | awk -F'load average: ' '{print $2}' | awk -F', ' '{print $3}')
# bash doesn't understand floating point,
# so truncate the number to an integer
thisloadavg=$(echo "$loadavg" | awk -F. '{print $1}')
if [ "$thisloadavg" -ge 3 ]; then
    echo "Critical - Load Average $loadavg ($thisloadavg)"
    exit 2
elif [ "$thisloadavg" -ge 2 ]; then
    echo "Warning - Load Average $loadavg ($thisloadavg)"
    exit 1
else
    echo "Okay - Load Average $loadavg ($thisloadavg)"
    exit 0
fi

I then deployed these scripts to all of the targeted machines using scp, making sure the files were executable and reachable. I chose to place them in /usr/share/nagios_scripts to make life easier for the other administrators.

On my Nagios machine I added a new configuration directory to the nagios.cfg file and placed a new hosts.cfg in it that included all the hosts. I made sure to add my personal touch: an icon for each machine to appear on the hosts list as well as the map.
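A host entry in that hosts.cfg might look roughly like the sketch below; the host name, alias, address, and icon files are placeholders, with statusmap_image being the directive that puts the icon on the map:

```
define host{
        use                     linux-server
        host_name               host1
        alias                   Critical Host 1
        address                 192.0.2.10
        icon_image              redhat.png
        statusmap_image         redhat.gd2
        }
```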

Finally, I created a services.cfg file and added the service definitions.

define service{
        use                             local-service
        host_name                       host1,host2,host3
        service_description             Load_Avg_(15mins)
        check_command                   check_by_ssh!nagios!'/usr/share/nagios_scripts/load.sh'
        notifications_enabled           0
        }

define service{
        use                             local-service
        host_name                       host1,host2,host3
        service_description             /boot_Disk_Space
        check_command                   check_by_ssh!nagios!'/usr/share/nagios_scripts/disk.sh /boot 90 75'
        notifications_enabled           0
        }
define service{
        use                             local-service
        host_name                       host1,host2,host3
        service_description             /_Disk_Space
        check_command                   check_by_ssh!nagios!'/usr/share/nagios_scripts/disk.sh / 90 75'
        notifications_enabled           0
        }
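For completeness, the check_by_ssh command referenced above has to exist in commands.cfg. A minimal definition under the usual plugin layout (with the $USER1$ macro pointing at the plugins directory) could be:

```
define command{
        command_name            check_by_ssh
        command_line            $USER1$/check_by_ssh -H $HOSTADDRESS$ -l $ARG1$ -C '$ARG2$'
        }
```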

A quick $ /etc/init.d/nagios reload and everything was working; all in all it took a little under 20 minutes.