We have used an existing, robotic, multi-lens, all-sky camera system, coupled to a dedicated data reduction pipeline, to automatically determine orbital parameters of satellites in Low Earth Orbit (LEO). Each of the fixed cameras has a field of view of $53 \times 74$ degrees, and the five cameras combined cover the entire sky down to 20 degrees from the horizon. Each camera takes an image every 6.4 seconds, after which the images are automatically processed and stored. We have developed an automated data reduction pipeline that recognizes satellite tracks to pixel-level accuracy ($\sim 0.02$ degrees) and uses their endpoints to determine the orbital elements in the form of standardized Two-Line Elements (TLEs). The routines, which use existing algorithms such as the Hough transform and the RANSAC method, can be applied to any optical dataset. For a satellite with an unknown TLE, we need at least two overflights to accurately predict the next one. Known TLEs can be refined with every pass, improving, for example, collision detection or orbital decay predictions. Our current data analysis has focused on satellites in LEO, for which we recover between 50% and 80% of the known overpasses during twilight. We have detected LEO satellites down to 7th visual magnitude. Objects at higher altitudes, up to geosynchronous orbit, were observed visually, but are currently not picked up automatically by our reduction pipeline. We expect that with further improvements to the data reduction, and potentially with longer integration times and/or different optics, the instrumental set-up can be used to track a significant fraction of satellites up to geosynchronous orbit.
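
As an illustration of the kind of streak detection described above, the following Python sketch finds a straight satellite track in a single frame with a probabilistic Hough transform and refines its endpoints with a RANSAC line fit. It is a minimal sketch only: the use of the scikit-image and NumPy libraries, the function name, and all threshold values are illustrative assumptions, not the authors' actual pipeline code.

\begin{verbatim}
import numpy as np
from skimage.feature import canny
from skimage.transform import probabilistic_hough_line
from skimage.measure import LineModelND, ransac

def find_satellite_track(image, sigma=3.0):
    """Return the two endpoints (x, y) of the dominant straight
    streak in a background-subtracted frame, or None if none found.
    Illustrative sketch; not the authors' pipeline."""
    # Edge map: a satellite streak shows up as a long, thin edge.
    edges = canny(image, sigma=sigma)

    # Probabilistic Hough transform: collect long straight segments.
    segments = probabilistic_hough_line(edges, threshold=10,
                                        line_length=100, line_gap=5)
    if not segments:
        return None

    # Take the longest segment as the streak candidate.
    (x0, y0), (x1, y1) = max(
        segments,
        key=lambda s: np.hypot(s[1][0] - s[0][0], s[1][1] - s[0][1]))

    # Collect edge pixels close to the candidate line, then refine the
    # fit with RANSAC to reject remaining outliers (stars, hot pixels).
    ys, xs = np.nonzero(edges)
    points = np.column_stack([xs, ys]).astype(float)
    p0 = np.array([x0, y0], dtype=float)
    d = np.array([x1 - x0, y1 - y0], dtype=float)
    d /= np.linalg.norm(d)
    normal = np.array([-d[1], d[0]])
    near = points[np.abs((points - p0) @ normal) < 10.0]

    model, inliers = ransac(near, LineModelND, min_samples=2,
                            residual_threshold=2.0, max_trials=500)

    # Project the inliers onto the fitted line; the extreme projections
    # give the refined endpoints used for the orbit determination.
    t = (near[inliers] - model.params[0]) @ model.params[1]
    start = model.params[0] + t.min() * model.params[1]
    end = model.params[0] + t.max() * model.params[1]
    return start, end
\end{verbatim}

In this sketch the Hough transform supplies a coarse candidate and RANSAC rejects contaminating edge pixels from stars and detector artefacts; the resulting endpoints, once converted to sky coordinates, are the kind of measurements from which a TLE can be fitted.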