I'd propose something the other way around.
How to make the life of a data scientist working in the Spark shell or in R much easier: by allowing them to consume corporate data from SAP systems residing on HANA in a transparent and performant way, letting Spark push operations between HANA tables (joins, aggregations, filters) down to the HANA SQL/calc engines.
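To make that concrete, here's a minimal sketch of what this looks like from the Spark shell. The host, port, schema, table, and credentials are all placeholders, and I'm using Spark's generic JDBC data source with the SAP HANA JDBC driver for illustration; plain JDBC only pushes down filters and projections, while a HANA-aware data source like Vora's can push down joins and aggregations as well:

```scala
import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder().appName("hana-pushdown").getOrCreate()

// Register a HANA table as a Spark DataFrame (all connection values are placeholders)
val orders = spark.read
  .format("jdbc")
  .option("url", "jdbc:sap://hana-host:30015")  // placeholder host/port
  .option("driver", "com.sap.db.jdbc.Driver")   // SAP HANA JDBC driver class
  .option("dbtable", "SALES.ORDERS")            // placeholder schema.table
  .option("user", "SPARK_USER")
  .option("password", "***")
  .load()

orders.createOrReplaceTempView("orders")

// The WHERE filter is pushed down to HANA, so only matching rows cross the wire;
// with a HANA-aware source the GROUP BY could be executed in HANA too.
spark.sql("""
  SELECT customer_id, SUM(amount) AS total
  FROM orders
  WHERE order_date >= '2016-01-01'
  GROUP BY customer_id
""").show()
```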
As a next step, you can explore more complex scenarios. For example, imagine a data set spanning both HANA and HDFS: some new customers arrive through HANA-based systems and others enter via Hadoop, and you want to feed both into a customer scoring algorithm. You could push the same scoring logic down to both HANA and HDFS and apply it locally in each engine, instead of moving all the data to Spark and applying the logic centrally. A rough sketch of that follows below.
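Here's roughly how that split-data scenario could look. The table, HDFS path, column names, and the scoring rule itself are made up for illustration; the point is that one scoring expression is applied to both sources, and simple expressions like this filter can be evaluated close to the data rather than centrally in Spark:

```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions._

val spark = SparkSession.builder().appName("split-scoring").getOrCreate()

// New customers landing in HANA (same placeholder connection options as above)
val hanaCustomers = spark.read
  .format("jdbc")
  .option("url", "jdbc:sap://hana-host:30015")
  .option("driver", "com.sap.db.jdbc.Driver")
  .option("dbtable", "CRM.NEW_CUSTOMERS")
  .option("user", "SPARK_USER")
  .option("password", "***")
  .load()

// New customers landing in Hadoop (placeholder path and schema)
val hdfsCustomers = spark.read.parquet("hdfs:///data/new_customers")

// One made-up scoring expression, applied identically to both sources
val score = col("revenue") * lit(0.7) + col("visits") * lit(0.3)

val scored = hanaCustomers.select(col("customer_id"), score.as("score"))
  .unionByName(hdfsCustomers.select(col("customer_id"), score.as("score")))

// The final filter on the score can run where the data lives
scored.filter(col("score") > 50).show()
```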
That's all possible with HANA Vora.