Exfiltrating Data using Light

Using python to capture and project data

While reading the news, I have seen a lot of people exfiltrate data from compromised air-gapped systems using a variety of methods, whether light, vibrations, or sound. I thought it would be an interesting project to try to exfiltrate some data by flashing different colours on a monitor.

The Basic Idea

The basic idea is getting a screen to flash colours in a certain combination and recording it with a camera. A Python script then analyzes the video: if the screen flashes red, the bit is 1; if it flashes blue, the bit is 0; and if it is neither, it is a separator, shown as "|". I can then run this through a program that reconstructs the original message. Simple in concept, but a bit more annoying to figure out.
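To make the encoding concrete, here is a tiny illustration (a hypothetical `encode` helper I wrote for this post, not part of the project code) of how a short message maps onto the flash sequence:

```python
def encode(message):
    """Map each character to its 8-bit binary form, with a '|'
    (green separator flash) after every bit; red=1, blue=0."""
    stream = ""
    for letter in message:
        for bit in format(ord(letter), '08b'):
            stream += bit + "|"
    return stream

print(encode("Hi"))
# 'H' is 01001000 and 'i' is 01101001, each bit followed by a separator
```

So every character costs sixteen flashes: eight colour flashes carrying bits and eight green flashes separating them.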

The Code

Projecting the Message

This code converts text into binary data (one 8-bit byte per character) and displays it in a turtle graphics window as flashes of red, blue, and green. Red represents 1, blue represents 0, and green is the separator between bits. Through the argument parser, the user can change the message, the speed of the flashes, and the waiting period before and after the message. The binary representation of the message and the time taken to display it are also printed to the console.

import time
import turtle
from argparse import ArgumentParser

def splash(pause, wait, message):
    wn = turtle.Screen()
    wn.setup(width=1.0, height=1.0)
    data_in_binary = ""
    wn.bgcolor("green")
    time.sleep(wait)
    start_time = time.time()
    end_time = start_time  # so the timing print works even for an empty message
    for letter in message:
        for bit in format(ord(letter), '08b'):  # 8-bit binary letter
            if bit == "1":
                wn.bgcolor("red")
            else:
                wn.bgcolor("blue")
            data_in_binary += bit
            time.sleep(pause)
            end_time = time.time()
            wn.bgcolor("green")  # separator flash between bits
            time.sleep(pause)
    print(data_in_binary)
    print("time elapsed: ", end_time - start_time)
    print("bits: ", len(message) * 8)
    wn.bgcolor("green")
    time.sleep(wait)

def main():
    parser = ArgumentParser(description="Light Binary Project Screen")


    parser.add_argument("-m", "--message",
                        action="store", default="testing123",
                        help="The message. The default is testing123"
                        )
    parser.add_argument("-s","--speed", 
                        action="store", default=0.2,
                        help="Pause between flashes of colour."
                        )
    parser.add_argument("-w","--wait", 
                        action="store", default=10,
                        help="Wait period before and after flashes."
                        )


    args = parser.parse_args()
    try:
        speed = float(args.speed)
        wait = float(args.wait)
    except ValueError:
        print("Speed and wait must be numbers")
        exit()

    splash(speed, wait, args.message)

if __name__ == "__main__":
    main()

Analyzing the Video

This code analyzes video, either from a webcam or from a video file, and extracts binary data from the colour values of a single pixel chosen by the user with a double-click. The extracted binary data is then processed to collapse repeated bits, remove noise, and finally convert back to text, which is printed to the console. The user can specify the video source and the sensitivity.

import cv2
from argparse import ArgumentParser

def click_function(event, x,y,flags,param):
    if event == cv2.EVENT_LBUTTONDBLCLK:
        global global_x, global_y, collection
        global_x = x
        global_y = y
        collection = True

def what_colour(r,g,b):
    if r > 245-sens and g < 10+sens and b < 10+sens:
        return "1"
    elif r < 10+sens and g < 10+sens and b > 245-sens:
        return "0"
    else:
        return "|"

def format(string):
    if string == "":
        return "NO INFO GIVEN"

    # Collapse each run of identical symbols to one symbol,
    # dropping runs shorter than the fuzzy threshold (camera noise)
    filtered = ""
    current_bit = string[0]
    current_count = 0
    for i in range(len(string)):
        if string[i] == current_bit:
            current_count += 1
        else:
            if current_count >= fuzzy:
                filtered += current_bit
            current_bit, current_count = string[i], 1
    if current_count >= fuzzy:
        filtered += current_bit

    # Removing the "|" separators
    stripped = filtered.replace("|", "")

    # Adding a space every 8 bits
    formatted_binary = ' '.join(stripped[i:i+8] for i in range(0, len(stripped), 8))

    # Converting each 8-bit byte back to a character
    text = "".join(chr(int(byte, 2)) for byte in formatted_binary.split())

    print("\n" * 8)
    print("Binary: ", formatted_binary)
    print("Text: ", text)

def webcam(camera_id):
    global collection
    collection, raw = False, ""
    vid = cv2.VideoCapture(camera_id)
    print("Camera's up")
    # Register the click handler once, before the capture loop
    cv2.namedWindow("image")
    cv2.setMouseCallback("image", click_function)

    while True:
        ret, frame = vid.read()
        if not ret:
            break
        cv2.imshow("image", frame)

        if collection:
            # Only sample the pixel once the user has double-clicked one
            b, g, r = frame[global_y, global_x]
            bit = what_colour(r, g, b)
            print(bit, end="")
            raw += bit
        if cv2.waitKey(1) & 0xFF == ord('q'):
            break

    vid.release()
    # Destroy all the windows
    cv2.destroyAllWindows()
    format(raw)

def analyze_file(file_location):
    raw = ""
    global collection
    collection = False
    # Show the first frame so the user can pick a pixel
    video = cv2.VideoCapture(file_location)
    ret, img = video.read()
    if not ret:
        print("Could not read video file")
        return
    cv2.imshow("image", img)
    cv2.setMouseCallback("image", click_function)

    while True:
        if cv2.waitKey(1) & 0xFF == ord("q") or collection:
            break

    # Close the window
    cv2.destroyAllWindows()
    if not collection:
        print("No pixel selected")
        return

    # Re-open the video and sample the chosen pixel in every frame
    video = cv2.VideoCapture(file_location)
    success, img = video.read()
    while success:
        b, g, r = img[global_y, global_x]
        bit = what_colour(r, g, b)
        print(bit, end="")
        raw += bit
        success, img = video.read()
    format(raw)

def main():
    parser = ArgumentParser(description="Light Binary for Reading light")

    parser.add_argument("-c", "--camera",
                        action="store", default="0", choices=("0", "1", "2", "3"),
                        help="Camera index to read from."
                        )
    parser.add_argument("-f", "--file",
                        action="store", default=None,
                        help="Path to a video file to analyze."
                        )
    parser.add_argument("-rgb",
                        action="store", default=90,
                        help="RGB sensitivity."
                        )
    parser.add_argument("-sens", "--sensitivity",
                        action="store", default=1,
                        help="Sensitivity gets rid of single-bit pickups. With a good camera you can set it to 0.")
    args = parser.parse_args()

    try:
        global sens, fuzzy, raw
        sens = int(args.rgb)
        fuzzy = int(args.sensitivity)
        raw = ""
    except ValueError:
        print("RGB and sensitivity must be integers")
        exit()

    if args.file is not None:
        analyze_file(args.file)
    else:
        webcam(int(args.camera))

if __name__ == "__main__":
    main()
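The noise-filtering step is easier to see in isolation. Here is the same run-length logic applied to a synthetic raw stream (the `debounce` name and the sample stream are mine, made up for illustration): a real flash spans several frames, so it survives the threshold, while a one-frame glitch is dropped.

```python
def debounce(raw, fuzzy=2):
    """Collapse each run of identical symbols into a single symbol,
    dropping runs shorter than the fuzzy threshold (camera noise)."""
    out, current, count = "", raw[0], 0
    for ch in raw:
        if ch == current:
            count += 1
        else:
            if count >= fuzzy:
                out += current
            current, count = ch, 1
    if count >= fuzzy:
        out += current
    return out

# Each flash lasts a few frames; the lone '1' mid-stream is a glitch:
raw = "00|||111|||00||1000|||00|||000|||00|||111|||"
bits = debounce(raw).replace("|", "")
text = "".join(chr(int(bits[i:i+8], 2)) for i in range(0, len(bits), 8))
print(text)  # the eight surviving bits, 01000001, spell "A"
```

This is why the sensitivity flag matters: with a shaky camera a higher threshold rejects more glitches, at the cost of needing longer flashes.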

Improvements

One improvement is splitting the screen into multiple sections and showing multiple streams concurrently. This could at least double the speed at which information is transmitted. It would add an extra level of complexity, but with the added speed it is definitely worth the thought.

Another possible improvement is getting rid of the green separator colour. This would be quite a complex task, since the receiver would have to rely on timing alone to tell consecutive identical bits apart, but because every bit currently costs a colour flash plus a separator flash, it could roughly halve the transmission time. It would also enable me to use just a single flashing light or colour to communicate data. This would be a fun future project.
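One way the separator-free version could work, assuming the receiver knows the sender's frame rate and flash duration (a sketch with made-up names, not tested hardware): instead of waiting for a colour change, sample the pixel once per flash interval, in the middle of each flash.

```python
def sample_bits(frame_colours, fps=30, pause=0.2):
    """Decode a per-frame symbol string by sampling at the sender's
    known flash rate, so runs of identical bits are still counted."""
    frames_per_bit = round(fps * pause)  # e.g. 6 frames per flash
    mid = frames_per_bit // 2            # sample mid-flash, away from transitions
    return "".join(frame_colours[i]
                   for i in range(mid, len(frame_colours), frames_per_bit))

# Two consecutive 1-bits look like one long red flash, but
# fixed-rate sampling still recovers both:
print(sample_bits("111111" + "111111" + "000000"))  # "110"
```

The hard part in practice would be clock drift between the sender and the camera, which is why real optical covert channels usually add some form of self-clocking encoding.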

Final Notes

Overall, this was a fun project. If you want to try it out, you can find it here on my GitHub, along with some test videos. If you have any questions, you can leave a comment below or reach out to me on Twitter at @dingo418.