This will entirely depend on what prop you want to automate and the event classes required. However, at a basic level, you will require:
Our goal is to make our solution simple enough for any prop maker to fit into their props. Our connections at the logic level (3.3 - 5.0V) are generally crimped 'Dupont' style. For power handling, such as motors, we use the most appropriate connectors.
However, we understand that more complex props with interactivity and simulations require a consultative approach. We are ready to help your team in whatever way necessary to achieve your goals. This could mean bespoke code and events, through to integration assistance both on-site and on-set.
Hopefully not! We have tried to make the operation of our solutions as simple as possible, so almost anyone within a production team can control the props from our user interface.
However, any changes to the events will need to be carried out by someone with a basic understanding of code, so that no mistakes are made. We have manuals on our support page, and we also provide initial training as part of our advanced solution.
We are currently operating at a timescale of 4-6 weeks depending on the complexity of the project and resources required.
Websockets over TCP/IP. We mostly use industry-standard wireless networks so that there are no additional cables on set which could pose a trip risk - no breaking a leg! Wireless also ensures that props are quick and easy to reposition.
Where appropriate, we can also communicate with props through Power over Ethernet (PoE), which enables us to use just one RJ45 cable to provide both data and power to powered devices (PD) instead of two cables.
It is also possible to communicate with props used in underwater scenes by switching from WiFi to lower radio frequencies, which pass through water with relative ease. A quick fact that you may or may not know: because WiFi uses higher frequencies, between 2.4 and 5 GHz, the waves are more easily absorbed and scattered by water, causing distortion. For this environment we are able to waterproof the prop(s) and ruggedise them.
Asked by Dominic ...
The fundamental reason is the flexibility of the messaging format. Props need a far larger data namespace and wider value ranges than lighting fixtures.
We love DMX512 in the right place though, so we have a SWN-DMX unit that can pass events to DMX - more on this later.
DMX512 essentially has a fixed data frame that passes a sequence of 512 channels, each holding a value between 0 and 255. This data flows in one direction only.
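That fixed frame can be modelled as a simple data structure: a 512-byte array where each index is a channel. This is an illustrative sketch of the concept, not our implementation:

```python
# Illustrative model of a DMX512 universe: 512 channels, each 0-255,
# sent as one fixed frame in a single direction (controller -> fixtures).
class DMXUniverse:
    def __init__(self):
        self.channels = bytearray(512)  # all channels start at 0

    def set_channel(self, channel, value):
        # Channels are numbered 1-512; values are clamped to the 0-255 byte range
        if not 1 <= channel <= 512:
            raise ValueError("DMX channel must be 1-512")
        self.channels[channel - 1] = max(0, min(255, value))

    def frame(self):
        # The complete frame that would be streamed down the DMX line
        return bytes(self.channels)

universe = DMXUniverse()
universe.set_channel(1, 255)   # first light to full brightness
print(len(universe.frame()))   # 512
print(universe.frame()[0])     # 255
```

Every frame carries all 512 channels whether they changed or not, which is exactly the rigidity the Set Wide Net's event model avoids.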
The Set Wide Net uses Websockets to pass JSON-formatted events. This gives an effectively infinite namespace of devices, each of which can use a rich data format. Importantly, props can publish their own events, which gives us two-way communication.
Examples:
{"e":"Counter1", "d":{"target" : "CounterValue", "startvalue" : 47610000, "direction" : "reverse", "rate" : 10, "step" : 100 }} // Start counting in REVERSE from 47610000 at 10x rate in steps of 100
{"e":"LED1", "d":{"sequence": [{"duty" : 100, "rate" : 800},{"duty": 10, "rate" : 100} ,{"duty":0,"rate" : 2000}]}} // Fade 'egg' LED in a continuous cycle from 100% to 10% to 0% at different rates
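Under the hood, an event is just a JSON object with an 'e' (event name) and 'd' (data) field, sent over the Websocket connection. A minimal sketch of composing and parsing such an event with Python's standard json module - the helper function is ours for illustration, not part of the SWN API:

```python
import json

def make_event(name, data):
    # Compose an SWN-style event: 'e' names the device/event, 'd' carries the payload
    return json.dumps({"e": name, "d": data})

# Start the counter prop counting in reverse, as in the example above
msg = make_event("Counter1", {
    "target": "CounterValue",
    "startvalue": 47610000,
    "direction": "reverse",
    "rate": 10,
    "step": 100,
})

# The receiving prop parses it straight back into a dict
event = json.loads(msg)
print(event["e"])                # Counter1
print(event["d"]["direction"])   # reverse
```

Because the payload is arbitrary JSON, a new prop class just defines its own 'd' structure - no channel map to negotiate.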
For more advanced behaviour we can use the Set Wide Net for separation of concerns. A good example is simulation, where a simple code block listens for events and responds to them.
You can see a short video clip of this simulation example.
The code fragment below deals with two engine 'props'. Another fragment listens to the 'throttle' and 'load' controls, which are practical props that an actor really interacts with. If the simulated temperatures exceed limits, the 'engine' prop will emit an 'Alarm' event. If the temperature increases to the 'failure' limit, the engines will shut down. The 'Master Alarm' prop might listen for 'Alarm' events from the engines and emit a 'Master Alarm' event. The gauge and UI elements listen for temperatures, revs, state etc. and display them accordingly.
This is a simple example of how we can use the Set Wide Net to create a complex system.
async def update_engine():
    alarm = False
    master_alarm = False
    engines = [engine1, engine2]
    while True:
        # Deal with the engines
        for engine in engines:
            engine.update()
            if abs(engine.temperature - engine.last_temperature) > 2:
                message_queue.append({"e": engine.meter_temperature, "d": "{:.1f}".format(engine.temperature)})
                engine.last_temperature = engine.temperature
            if abs(engine.rpm - engine.last_rpm) > 80:
                message_queue.append({"e": engine.meter_rpm, "d": "{:.1f}".format(engine.rpm)})
                engine.last_rpm = engine.rpm
            if engine.state != engine.last_state:
                if engine.state == "Failure":
                    alarm = True
                    message_queue.append({"e": engine.failure_UI, "d": "visible"})
                elif not alarm:
                    message_queue.append({"e": engine.failure_UI, "d": "hidden"})
        # Deal with the load meter
        if abs(engine1.load - engine1.last_load) > 0.05:
            message_queue.append({"e": "Stepper4", "d": {"position": int(engine1.load * 100)}})
            engine1.last_load = engine1.load
        # Deal with the Master Alarm
        master_alarm = False
        for engine in engines:
            if engine.state != engine.last_state:
                if engine.state == "Alarm":
                    master_alarm = True
                    message_queue.append({"e": "UI4", "d": "visible"})
                elif not master_alarm:
                    message_queue.append({"e": "UI4", "d": "hidden"})
                engine.last_state = engine.state
        # Rest a bit
        await asyncio.sleep(0.3)
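The fragment above assumes an engine object and a shared message queue already exist. A minimal, hypothetical sketch of that scaffolding - the real prop modules are richer, but this shows the attributes the loop relies on:

```python
import asyncio  # the update loop above awaits asyncio.sleep between passes

# Events waiting to be published to the Set Wide Net
message_queue = []

class Engine:
    # Hypothetical minimal engine 'prop' with the fields the loop reads
    def __init__(self, name):
        self.meter_temperature = f"{name}_Temp"
        self.meter_rpm = f"{name}_RPM"
        self.failure_UI = f"{name}_Failure"
        self.temperature = self.last_temperature = 20.0
        self.rpm = self.last_rpm = 0.0
        self.load = self.last_load = 0.0
        self.state = self.last_state = "Stopped"

    def update(self):
        # The real simulation integrates throttle and load;
        # here we just nudge the values to show the shape of it.
        self.rpm += 100.0
        self.temperature += 1.0
        if self.temperature > 120:
            self.state = "Failure"

engine1, engine2 = Engine("Engine1"), Engine("Engine2")
```

Because the loop only ever appends events to the queue, the simulation code never needs to know which physical gauges, UIs or steppers are listening.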
You can see that different teams of prop makers can craft props that work in concert, interacting with the actors so that the actors experience a script-motivated realism.
Here's a tease of what we can do with DMX:
{"e":"DMX512", "d":{"channel" : 1, "value" : 255}} // Turn on the first light to full brightness
And here's an example of our SWN-DMX unit performing autonomous actions on an event:
{"e":"DMX1", "d":{"fade": 4, "channel" : 5, "value" : 255}}
{"e":"DMX1", "d":{"fade": 2, "channel" : 4, "value" : 16}}
{"e":"DMX1", "d":{"fade": 1, "channel" : 3, "value" : 0}}
The above example shows three different DMX channels fading to new values at different rates. Once these events are received, the SWN-DMX unit calculates all the intermediate values and sends the required DMX stream.
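The intermediate-value calculation can be sketched as linear interpolation between the channel's current value and its target over the fade time. This illustrates the idea only - it is not the SWN-DMX firmware:

```python
def fade_steps(start, target, seconds, fps=44):
    # DMX refresh is typically around 44 frames per second; compute the
    # channel value for each frame so it glides linearly to its target.
    frames = max(1, int(seconds * fps))
    return [round(start + (target - start) * f / frames) for f in range(1, frames + 1)]

# Channel currently at 0, fading to 255 over 4 seconds
steps = fade_steps(0, 255, 4)
print(len(steps))    # 176 frames
print(steps[-1])     # 255 - ends exactly on the target value
```

Each channel fades independently, so the three events above simply run three such interpolations in parallel, merged into the single outgoing DMX frame stream.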
Asked by Sian ...
The answer is YES, and there are many versions of yes.
Most VP is made using plates and scenes rendered by Unreal Engine onto volumes. Actors in front of the volume can use props in the conventional way, and all our control systems will work.
However, it gets more interesting when one wants to integrate foreground, background, cinematic lighting and off-screen elements like wind and smoke.
Here's a quick re-definition of props for this section of the FAQ:
UE can emit and consume Websocket events; in UE5.5 there is a blueprint for Websockets that attaches to the running instance. This can be referenced by individual assets and UE actors. For example, a light could be controlled by an external event, or the distance of the camera from an asset could be published to the SWN.
Collisions are a useful example: collisions in UE can be published to the SWN so that physical props may react to them.
The difference between the events is the formatting: the SWN uses JSON and UE uses plain text. Within UE, both Python and C++ can be used to parse JSON.
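Bridging the two is a one-line parse either way. A sketch of a small event dispatcher on the Python side - the 'Collision' event name and handler are hypothetical, shown only to illustrate the pattern:

```python
import json

handlers = {}

def on(event_name):
    # Register a handler function for a named SWN event
    def register(fn):
        handlers[event_name] = fn
        return fn
    return register

@on("Collision")  # hypothetical event published by UE when assets collide
def collision(data):
    return f"prop reacts at force {data['force']}"

def dispatch(raw_text):
    # UE hands us plain text; parsing the JSON gives us the event dict,
    # and the 'e' field routes it to the right handler.
    event = json.loads(raw_text)
    return handlers[event["e"]](event["d"])

print(dispatch('{"e": "Collision", "d": {"force": 7}}'))
# prop reacts at force 7
```

The same dispatch pattern works in reverse: a handler can serialise a reply with json.dumps and hand it back to the Websocket blueprint as text.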
Just for fun - you could imagine our actors in a volume interacting with a screen which itself is rendering from another instance of UE, one could then use events to synchronise action between the background and foreground UE instances.
We get it, people enjoy writing the scripts for Arduino to animate props - there's a lot of satisfaction in seeing a prop come to life.
Using Automated Film Prop solutions is actually a similar approach, except that you're writing and thinking at a higher level. For example, we've abstracted all the classes and objects you need to drive a gauge. You still need to know what values to send to the gauge, and what sort of damping will make the gauge work for the narrative.
You're working at the event level within the server; the prop modules then do the heavy lifting for hardware interaction. So, if you're working on a collection of props, you can sit back and watch the interaction between them. You're not having to pull a prop apart to get a USB connection to change a parameter - you just send the new values as an event.
You can create content for our Screebleys using HTML pages, which are fairly easy to learn. However, if you want to create something more sophisticated, you will need knowledge of SVGs with layers. That way you can send commands to individual layers for sequences such as a 'peril scenario'.
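A layer command would ride on the same event format as everything else on the SWN. This is a hypothetical sketch only - the 'Screebley1' event name and the layer fields are illustrative, not a published schema:

```python
import json

# Hypothetical 'peril scenario': hide the normal status layer of the SVG
# and blink a warning overlay. Layer and event names are ours for illustration.
peril = {
    "e": "Screebley1",
    "d": {
        "sequence": [
            {"layer": "status_normal", "visible": False},
            {"layer": "warning_overlay", "visible": True, "blink_ms": 500},
        ]
    },
}
msg = json.dumps(peril)
print(json.loads(msg)["d"]["sequence"][1]["layer"])  # warning_overlay
```

Because each SVG layer is addressable by name, one event can choreograph a whole screen change without regenerating the page.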
WiFi communications could cover several books, but here's a bullet point guide.
Yes - Absolutely -- please get in touch.
The easiest way to contact us is via email at either: rachael@automatedfilmprops.com for general enquiries, or andy@automatedfilmprops.com for specific technology questions. You can also find these on our contact us page.
Privacy is our top priority. How do you ensure that your network is secure?
The SWN (Set Wide Net) server is secured and encrypted with SSH, so you'll need an account and password to log in, transfer files, stop/start services etc.
The prop side of the Set Wide Net uses a light password and should not be considered secure. This is deliberate: this network does not connect to the Internet, and we don't want unnecessary security holding up production work. Equally, the prop Ethernet side has no security - all you will find on this side is Websocket events and HTTP traffic to the virtual props.
You want it more secure? We can provide an SWN server with multiple interfaces; an extra Ethernet port can be configured with full X.509 certificate security and the whole nine yards.