
Is This the Next Big Thing in Follow Focus Technology for Video?

Andra Motion Control Unit and App

One of the next big challenges for product makers is to invent a better follow focus. While autofocus technology is getting better all the time, it still has to be gamed in a way to work in your favor. It works best with a single subject, but if you’ve got multiple people moving all around the frame it can get tricky, even with an AF lock button. Cinema Control Laboratories thinks they have the answer for nearly perfect focus technology: the Andra Motion Focus system.

Here is the intro video:

Thanks to Newsshooter and Matt Allard, here’s how this system works:

Q. How does the Andra system work?

A: The system is essentially a hybridization of a motion capture system and a remote focus pulling system. Using a portable and easy-to-set-up magnetic mo-cap system, we’re able to very accurately track subjects and cameras in real time and use that data to drive a lens control motor. The mo-cap side of the system uses very small sensors which can be mounted to the performer beneath clothing, just like a lavalier microphone. The user can then decide where they want the focal point, relative to that sensor, and the system does the rest. We are able to get very impressive accuracy which, most importantly, allows you to get really crisp eye focus. We don’t just target the general area of a person; we get the focus right where you want it.

There are two ways to interface with the system. The basic approach is to use an iPad, which opens up a whole new world of creative options. Another option is to use the hand unit (the Arc), which is similar to hand units currently on the market except that it has a touch screen interface and offers an incredible range of new features, like the ability to sequence between desired focal points by simply hitting a button or turning the dial back and forth.

While the system can use an iPad as its main control surface, they are also developing a hand control unit called the Arc:

Andra Arc Unit

Some more interesting tidbits from their FAQ:

Can it be used for all types of lenses?

Yes. Our system is compatible with any lens with a focus gear, from inexpensive lenses such as Rokinon through to high-end lenses such as Carl Zeiss and Cooke.

Does this system work 100% of the time?

No tool works or is appropriate for 100% of situations. The Andra Motion Focus system is a new tool that will expand your creative and practical horizons. The more you work with the system, the more you discover about its capabilities. It will definitely allow you to do some things very easily that are difficult to achieve without the Andra Motion Focus system. It will also allow you to do things that you simply cannot do without lengthy rehearsals and multiple takes. However, there will be times when you’ll just want to pull focus the way you always have, which is why the Arc remote hand unit is being designed to give you all the familiar responsive manual controls you’ve come to expect from a high-end wireless system.

A lot of focus pulling is intuitive as the actors move – can you make a change in a split second / flick of a switch?

The system is designed to allow you to set up as many potential fixed focal points or performers as you need before the shot. (1 hub required for each performer.) Once you’re in the shot, you can rapidly switch to any other performer or fixed node at the tap of a button. You can also override the focus control with a fine or coarse adjustment whenever you please.

Can Andra be used to make the shot more organic, or does it feel automated?

Andra has infinitely adjustable parameters and has been designed to give you full control over focus, without having to worry about distances. Speed of pulls, composition and any other nuance of focus control is still at your fingertips.

What range could you use this for?

The range of the system is scalable depending on your usage and should be thought of more as area coverage than range. The Standard Package uses two sources on a boom pole mount, which gives a 24 ft x 16 ft area of coverage. This means that subjects can get up to 24 feet from camera. However, it is possible to use fixed nodes at longer distances, where depth of field is more forgiving, which allows performers to move in and out of the capture range. For the simplest setups, using a camera-mounted source with the DSLR package, the system is best used for wide lenses and close-up work. This setup has a range of approximately ten feet.

The Andra looks to have tons of customization options for fine-tuning how the focus will actually look, so it’s not just a matter of quick pulls directly to the selected mark. This is a complete FIZ (Focus Iris Zoom) system, and would replace whatever you are currently using. I think the idea of putting tiny sensors where you want the system to focus is pretty ingenious. This way you can have as many points as you need, and there is never anything “blocking” a sensor that is trying to “see” your subjects. Being able to override the system at a moment’s notice is also a huge plus.

This isn’t a system most people would buy, so while pricing is very high ($8,000 to $12,000 for a complete starter system depending on the options selected), it’s actually in line with other professional follow focus systems, and cheaper in some cases. It’s not going to be a solution right away for many productions, but when it’s set up correctly and used in the right environment, it looks like it may allow you to pull off some really complicated setups with perfect focus. Even if talent misses their mark completely, the system is still tracking a point on the talent. The only issue is that range is somewhat limited by the way the sensors work, as mentioned in the last FAQ question above.
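To put the need for this kind of accuracy in perspective, the usable depth of field at common cinema settings is remarkably thin. Here is a rough sketch using the standard hyperfocal/thin-lens approximation (the focal length, aperture, and circle-of-confusion values below are illustrative examples, not anything from the Andra spec):

```python
# Rough depth-of-field sketch using the standard hyperfocal
# approximation. All values are illustrative, not Andra specs.

def depth_of_field(focal_mm, f_number, subject_mm, coc_mm=0.025):
    """Return (near, far, total) focus limits in millimeters.

    coc_mm is the circle of confusion (~0.025 mm is a common
    value for Super 35).
    """
    hyperfocal = focal_mm ** 2 / (f_number * coc_mm) + focal_mm
    near = subject_mm * (hyperfocal - focal_mm) / (hyperfocal + subject_mm - 2 * focal_mm)
    far = subject_mm * (hyperfocal - focal_mm) / (hyperfocal - subject_mm)
    return near, far, far - near

# A 100mm lens at f/2.8 with the subject 3 m from camera:
near, far, total = depth_of_field(100, 2.8, 3000)
print(f"in focus from {near / 1000:.2f} m to {far / 1000:.2f} m "
      f"(~{total / 10:.0f} cm deep)")
```

With only about a dozen centimeters of in-focus depth in this example, a subject drifting a single step off their mark goes soft, which is why tracking a sensor on the talent rather than a rehearsed mark is so appealing.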

While I don’t think the AC with a traditional follow focus is going anywhere anytime soon, this is one of those systems that opens your eyes to the possibilities of new follow focus technology, and where we could be headed in the future.

If you want to know when pre-orders are going live, sign up for an email notification here. We’ll definitely be stopping by their booth at NAB, so you’ll hear a little more about how the system works in person next week.


[via Newsshooter]



28 COMMENTS

  • You can see the mo-cap on the wrists of the dancer. Cool idea, but the pulls don’t feel organic the way everything snaps into focus. I’m sure it can be calibrated to achieve exactly the look you are after, but why not just have a focus puller, then?

  • Cool start. Like Matt said, it doesn’t feel organic, still has an AF feel to it, but I’m sure after further calibration and tweaking, this will be another amazing tool to have on set.

  • AF test on the new Sony A6000. Multiple movements and individuals.

  • It would be amazing if there were a manual mode where you could use the mo-cap sensors to know when you were in perfect focus, but you could choose to sit there or wander around as you chose.

    • The FAQ mentions that if you want you can use the system just for real-time distance information to all your focus points and then pull focus completely manually whenever you want and however you want.

  • As mentioned some of it seems really robotic but most of it is really incredible. Technology is so rad.

    • I think a lot of the really fast focus movements in the video were made like that to show the possible speed of the system, not mimic organic movement.

      I have not been able to see any kind of focus pumping that you would expect from an autofocus system, so it should be no problem to just configure the pulls to be slower with another kind of acceleration curve (as I understand it, that should be totally possible) to make them look more organic.

      In the case of the focus just following the actress, I think this system already surpasses a lot of human focus pullers. I mean, 200mm at f/2.8 with someone walking quickly straight towards the camera… that is insane!

  • I’ve been dreaming about this product since 2012:

    Glad someone actualized it; “Radio Magic” was never gonna cut it, haha. (I was a sophomore in college, cut me some slack :P)

    My projection is that it won’t be more than five years before the working professional 1st AC has to swallow the reality that pulling focus, one of their current primary responsibilities, might become largely automated, especially on bigger-budget shoots that can afford nice things like these. The pulls might not look organic enough just yet, but I think with some clever programming and feedback from pro ACs it could be made to look more natural. It’s certainly not something the typical consumer of media would notice, but we are all snobs here, myself included, so I guess that’s a moot point. In any case, it’s just another tool in the shed: Think about how useful this could be for, say, a long-lens shot in which the subject is running through an alley straight at the camera in the dark!

    • You and I both, brother. I had sketches of this same idea from years ago. It was just so obvious to me as a sensible way to pull focus.

    • Yeah, no more compromises would be necessary. I mean now, even when you have a really skilled focus puller, you would probably not use a 100mm f2.8 on a steadicam with the talent running all over the place at different distances to the camera.
      The focus puller would say “sorry guys, but I can’t really guarantee that shot to be in focus”.

      With an automated system like this, it could be done without a single focus problem!

  • Regarding the organic question, I’m pretty sure you can program in a sequence of focus targets for a scene and then just manually pull focus for each… sequentially. Look at the second half of the ‘accuracy’ video on their website.

    • p.s. By which I mean that the system takes care of being in-focus for each of your targets but you have total control over when and how each focus change is done simply by swiping in the iPad app.

    • BobHadABabyItsABoy! on 04.3.14 @ 10:07AM

      I think you are more right than you know. I feel like this is very accurate to how the focus technology curve will develop, and also “Vector Video” is the new hot buzz concept. I hope pulling focus in post is developed soon so we do not have to invest so heavily into “antiquated” tech solutions.

  • Well,

    1:05, failed.

    The feeling and things that can change an entire emotional scene cannot be captured by a machine.

    • This was just a tech demo! The machine isn’t supposed to used in all-auto mode, it is supposed to be a tool for focus pullers to push their limits further.

      Of course it might also be used in full-auto mode if you don’t have a focus puller (better than nothing!) but regarding the price of this machine, I guess it is more aimed at higher budget production that have a focus puller, but need better accuracy for “impossible” shots.

      • … I meant to say “to be used in” not “to used in” – no way to edit here…

  • Christian Anderson on 04.3.14 @ 5:55PM

    This isn’t a machine attempting to replace a man. I can see some amazing shots being pulled off due to the new possibilities this brings.

  • Just… Wow. I’ve been dreaming of this system for ages. I just, I can’t believe how cool this is.

  • Heh… I remember a few years ago dreaming up a single-line LADAR focus puller: it would send out a single LADAR line that scans the middle of the image and draws a waveform-like line on a GUI representing the distances from the camera (where the LADAR emitter/sensor is mounted) to the objects of focus. On the GUI you could visualize the area in front of the camera, know exactly how far away this horizontal plane was, and track things manually by simply turning a puller knob on the interface that shows where the focus plane is and how the DOF falls off on either side of it. Using simple tracking algorithms you could set it to follow an object.

    Pros versus the Andra system: no need for tracker objects that could end up in shot or get forgotten.

    Cons versus the Andra system: it’s only one horizontal plane in this implementation. So in the case of the gun snapping into frame, as in one of the example videos, it could just go over the sampled line and get ignored by my system… But think of the possibilities of using a full 3D scan like what the Kinect does. That would make things even better (though there are some lag issues to contend with).

    Oh, and if you want to build my idea, mail me for contact and contract information ;)

  • Do you know the difference between red wine glasses blown by the mouth of an experienced glassblower and industrial glasses “Made in China”?
    I know!

  • Mallikarjuna Nayak on 04.19.14 @ 11:11AM

    Andra is a good follow focus.

  • I was a camera assistant in an earlier life, and this system appears to accomplish rapid focus pulls on a long lens (VERY hard to do) better than any human could achieve. Even if the rest of the show were manually focused, the long lens shots with fast-moving actors who aren’t machines and can’t always exactly hit their marks would be well accomplished without a great deal of ‘retakes for camera’. It might also mean being able to shoot with narrower depth of field and lower illumination levels, which means saving on lighting kits.

    The fluid pull could easily be accomplished by having a sensor on each actor or target and either:
    - Automating the pull with a combination of duration and curve (ease, ease-in, ease-out, linear, etc.), or

    - Setting up a set of sensor icons on the iPad, perhaps arranged in anticipated action order with sliders between them, and allowing the focus puller to start on sensor 1, then slide to sensor 2, then back to 1, etc. as they see fit. If there are not too many sensors, the operator may be able to do this to seamlessly blend focus with action.

    A side effect of this would be to record the state of focus, zoom (f/stop?) of the lens against each frame to a memory stick. This would be a fabulous way to accurately export such data to VFX and post who are trying to track the camera move with a view to adding set extensions etc.

    Thirty or so years ago, Elicon (who won a Sci-Tech Award for a follow focus system for optical printers) were looking at doing this. I don’t recall it happening, though.

    Great idea
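The "duration and curve" automation described in the comment above could be sketched as a simple eased interpolation between two sensor distances. This is purely a hypothetical illustration; none of these function names come from Andra's actual software:

```python
# Hypothetical sketch of an automated focus pull with easing.
# Not based on Andra's firmware; distances are in meters.

def ease_in_out(t):
    """Smoothstep: maps 0..1 to 0..1 with gentle acceleration
    at the start and deceleration at the end."""
    return t * t * (3 - 2 * t)

def focus_pull(start_m, end_m, duration_s, fps=24, curve=ease_in_out):
    """Return the focus distance for each frame of the pull."""
    frames = int(duration_s * fps)
    return [start_m + (end_m - start_m) * curve(i / frames)
            for i in range(frames + 1)]

# A 2-second pull from sensor 1 (2 m away) to sensor 2 (5 m away):
distances = focus_pull(2.0, 5.0, 2.0)
```

Swapping the curve function changes the feel of the pull: a linear curve reads as mechanical, while smoothstep-style easing mimics the gentle start and settle of a manual pull.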