New Perspectives Blog Series: Digital Transformation with SAP S/4HANA

Welcome to TruQua’s New Perspectives blog series.  This blog series provides a platform for our consultants to share their perspectives, insights, and experiences on emerging technologies such as SAP HANA and SAP Analytics Cloud. 

Today we are speaking with TruQua Senior Consultant, Trainer, and Speaker, Matt Montes. Matt specializes in digital financial transformation, using the Central Finance deployment option to help customers leverage the financial and controlling components of SAP S/4HANA without disrupting their current landscape.

 

 

Perspective 1: Looking Back to ECC

JS Irick: Matt, you represent a new generation in that you started your consulting career just before Simple Finance was released, so you are one of the first industry experts who “grew up” with S/4HANA. Obviously, you spend a lot of time with previous ERP versions working with Central Finance, as well as needing to communicate effectively with business stakeholders who are in the system day-to-day. There are hundreds of articles about what people with extensive ECC 6.0 experience think of S/4HANA, but I’d like to flip that on its head. What is your perception of ECC 6.0?

Matt Montes: This is a great question, especially in the context of how my SAP career started. My first implementation as an SAP consultant was actually a BPC Standard (10.0) Financial Consolidation project for one of the largest utility companies in Texas. To provide some context here, this project dealt with 55+ company codes and an enormous amount of master data. As you can imagine, I was in SAP terminology overload and immediately overwhelmed. Enter ECC 6.0. Once I gained access to ECC, I realized that all the information I needed was at my disposal, and I quickly began analyzing and dissecting the Enterprise Structure.

From here I was able to better understand how SAP modeled core FI concepts (Company Code, G/L Account, Version) and tie them back to BPC. This was my lightbulb moment, so to speak, which allowed me to start connecting the dots between the various financial modules and their integration points.

Taking what I learned in FI and figuring out how that tied into the Controlling and Logistics modules allowed me to gain a more holistic understanding of ECC. ECC, to me, is a revolutionary ERP that can be strategically leveraged to improve financial proficiency throughout an organization. Today’s strategic conversations are hyper-focused on S/4HANA being the next-generation ERP, specifically for those organizations looking to undergo a financial transformation and gain the benefits of an in-memory database, real-time centralized financials, and a unified financial table (ACDOCA). However, I would argue that without the core concepts and modules within ECC, S/4HANA wouldn’t be as profound as it is today. This is bolstered by the fact that when I deliver customer demos of S/4HANA, I tend to focus on the Fiori user experience, as this is an added benefit on top of the core concepts and modules that the customer is already used to in ECC. Ultimately, my perception of ECC is that it is the foundation for what an ERP should be, while simultaneously allowing S/4HANA to build upon that core foundation and deliver additional value.

JS Irick:  I love the positive take.  ECC stands as a true technical accomplishment, weathering major infrastructure, network, architecture and industry changes. It is refreshing to hear someone talk about the benefits of S/4HANA without denigrating the parts of ECC that have aged poorly.

You talk about three key benefits – the in-memory database, real-time centralized financials, and the Universal Journal. Frankly, all three are critical; can you dive into one of them a bit more to give our readers some more insight?

Matt Montes: Absolutely. As you are aware, financial reporting in ECC today can be quite convoluted, as there are a variety of tables that house actuals data (BSEG, BKPF, COEP, COBK, FAGLFLEXA, etc.). This often leads to a BW-driven reporting solution with multiple cubes pointed at multiple tables, with data loaded via batch jobs. The Universal Journal (ACDOCA) is a game-changing concept, as it allows enterprises to persist all of their actuals data in one central table, which is a stark contrast to the complexity associated with a BW reporting solution. In addition to housing actuals data in one universal table, the data is also written to ACDOCA in real time, which eliminates the need for batch jobs and provides the business with quick access to their data. Centralized reporting and real-time data can significantly improve how businesses operate and conduct their financial reporting and analytics.
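
To make the contrast concrete, here is a minimal sketch of what centralized reporting against the Universal Journal can look like from a reporting client. The connection details are placeholders and the field names are the commonly documented ACDOCA columns; this is an illustration of the single-table idea, not a drop-in report, and in practice access often goes through CDS views rather than direct SQL.

```python
# Minimal sketch: reading actuals directly from the Universal Journal (ACDOCA)
# instead of stitching together BSEG/BKPF/COEP or BW cubes.
# Connection details are placeholders; field names (RBUKRS, RACCT, GJAHR, POPER, HSL)
# are the commonly documented ACDOCA columns -- verify against your release.
from hdbcli import dbapi  # SAP HANA Python client

conn = dbapi.connect(address="s4hana-host", port=30015, user="REPORT_USER", password="***")
cursor = conn.cursor()

cursor.execute("""
    SELECT rbukrs AS company_code,
           racct  AS gl_account,
           gjahr  AS fiscal_year,
           poper  AS period,
           SUM(hsl) AS amount_local_currency
    FROM acdoca
    WHERE rldnr = '0L'            -- leading ledger
      AND gjahr = '2019'
    GROUP BY rbukrs, racct, gjahr, poper
    ORDER BY rbukrs, racct, poper
""")

for company_code, gl_account, year, period, amount in cursor.fetchall():
    print(company_code, gl_account, year, period, amount)

cursor.close()
conn.close()
```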

 

 

Perspective 2: Hidden Advantages of SAP S/4HANA

JS Irick: With its strategically limited scope and replication-based architecture, a Central Finance implementation is an order of magnitude faster than a full-blown S/4HANA rollout. This means users get into the system on live data fast. Based on the compressed timeline, I would expect the transition to the Universal Journal and Fiori-based applications to be even more refreshing. Everyone’s S/4HANA talk track seems the same: “single source of truth, profitability on all dimensions, role-based mobile-enabled user experience, etc. etc.” What are some of the S/4HANA advantages in process, UX, reporting, or operations that don’t get enough press in your opinion? Sell me on S/4: “What nobody mentions, but you’ll love, is ______”.

Matt Montes: There are some benefits of S/4HANA that are going to steal the headlines, such as increased performance, coding block simplification, the digital core, and unified financials. However, those benefits barely scratch the surface of what makes S/4HANA the ERP of the future. Fiori gets a lot of praise, rightfully so, but the praise usually highlights the tile-based user experience or mobile accessibility. What isn’t talked about as much is the innate drill-down functionality and the linkage among the financial tiles.

For example, within Fiori, a financial analyst can select from a variety of tiles such as display financial balances, display financial line items, manage journal entries, or journal entry analyzer. However, if they start with the financial balances tile (summary-level data), they can seamlessly drill down to the line items without having to open another tab or access a different tile. Furthermore, from the line item tile, the end user can manage journals for a specific document or run a journal entry analyzer report. To me, this adds an enormous amount of value for the financial analyst or accountant who needs quick and easy access to their data. When accessing data through transaction codes in ECC (or the classic GUI in S/4HANA), you end up juggling multiple sessions and multiple T-codes, which is neither efficient nor effective.

Everybody talks about a “single source of truth” but doesn’t mention the operational benefits of having actual, plan, and consolidated data spread across ACDOCA, ACDOCP, and ACDOCU. Think about the reporting landscape today in an ECC environment, which is almost always a batch process populating various BW cubes off of which reports are subsequently built. In addition to the batch processing causing a delay in real-time reporting, there is also the concern of reconciling the data in ECC against what was loaded into the BW cubes. The operational benefit of having three core financial tables with actual, plan, and consolidated data persisted in real time, which can then be queried through Analysis for Office, Fiori, or CDS views, is hard to overstate. S/4HANA’s value proposition lies in its ability to integrate a multitude of technologies and software offerings into one all-inclusive ERP that meets statutory, management, and internal requirements.
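
As a simple illustration of the operational benefit Matt describes, the sketch below compares actuals and plan once both sit in dedicated tables (ACDOCA and ACDOCP) rather than scattered cubes. The DataFrames stand in for extracts pulled through CDS views, Analysis for Office, or SQL; the column names and figures are illustrative, not the delivered field names.

```python
# Minimal sketch: actual-vs-plan variance once actuals (ACDOCA) and plan data (ACDOCP)
# live side by side. The DataFrames stand in for real extracts; values are made up.
import pandas as pd

actuals = pd.DataFrame({
    "company_code": ["1000", "1000", "2000"],
    "gl_account":   ["400000", "410000", "400000"],
    "actual":       [125_000.0, 48_500.0, 93_200.0],
})
plan = pd.DataFrame({
    "company_code": ["1000", "1000", "2000"],
    "gl_account":   ["400000", "410000", "400000"],
    "plan":         [120_000.0, 50_000.0, 90_000.0],
})

variance = actuals.merge(plan, on=["company_code", "gl_account"])
variance["abs_variance"] = variance["actual"] - variance["plan"]
variance["pct_variance"] = variance["abs_variance"] / variance["plan"] * 100

print(variance)
```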

JS Irick: That’s a great way to describe the tangible benefits of the improved user experience. I hear “Role-Based UX” so often, but you’ve really explained what that means in terms of the user experience within discrete processes.

I’m a huge fan of the Launchpad UX in S/4HANA and SAC. Not only for the simplified way to change your shortcuts/tiles, but also the fact that you can introduce key metrics into your Launchpad. A question for you – Have you seen much adoption of the Fiori KPI tiles? To me, they’re one of the 4 pillars of S/4HANA reporting (along with AO, SAC and Fiori Reports), but I don’t see them mentioned very often.

Matt Montes: This is a great point JS, I haven’t seen much adoption of the Fiori KPI tiles; however, I do have some hands-on experience using them. In one of my previous engagements, we leveraged Fiori KPI tiles and SAP Smart Business to create utility-centric tiles. In the utility industry, companies have to report two sets of financials, one for GAAP accounting and another to abide by Federal Energy Regulatory Commission (FERC) standards. Within the FERC reporting requirements are metrics like Rate of Return, Rate Base, Return on Equity, Capital Structure, and Revenue Requirement. Each of these metrics is vital to a utility’s regulatory decision-making process, as well as to determining the amount of money it can recoup on its infrastructure-based investments (Power Plants, Natural Gas Pipelines, Water Treatment Plants, etc.). The Fiori KPI tiles and SAP Smart Business enabled us to create a utility-focused dashboard within Fiori that gave business users a medium to analyze their regulatory data. I think as Fiori continues to evolve, we will start to see a lot more integration of the KPI tiles, specifically as companies work towards building an S/4HANA environment with actual, plan, and consolidated data.

 

 

Perspective 3: Upskilling for the Future

JS Irick: In my opinion, your dedication to educating and training our Central Finance customers is crucial to our project success, and one of the reasons your customers are so happy with the final result. Any major technology disruption can be really scary to IT organizations because their day-to-day requirements don’t slow down, even as this major shift in their role is on the horizon. What’s your actionable advice for gearing up for S/4HANA, especially on a tight schedule?

Matt Montes: You bring up a really good point here. Digital transformations or changes in technology can be very demanding on the IT organization. Asking someone to acquire a skill set in a new technology while simultaneously maintaining their day-to-day duties can be incredibly cumbersome. My advice would be simple: understand your organization’s business processes and financial data model as best you can in order to reduce any uncertainties and close any knowledge gaps. Said more precisely, understand what modules of your ERP are being utilized and what data is associated with those modules. As SAP consultants, we often analyze business processes, master data, and configuration to gain an understanding of an enterprise. Therefore, IT organizations can also embark on a similar exercise to analyze and digest the business processes, master data, and configuration that make up their organization. This understanding helps to eliminate any uncertainties and close any potential knowledge gaps as they gear up for S/4HANA. Another approach to gearing up for S/4HANA is to undertake a proof of concept (POC) designed to address the potential uncertainties or gaps that may exist within the ERP. POCs can be extremely informative if positioned correctly, and they help to ramp up the end users by giving them hands-on experience prior to the full implementation. A majority of the configuration and master data that exists within ECC will be leveraged and reimplemented in some form or fashion into S/4HANA; therefore, truly understanding the inner workings of your ERP will help to reduce some of the barriers to entry that come with S/4HANA.

A third way to gear up for S/4HANA is through training or knowledge transfer sessions. At TruQua, we understand that a lot of the changes that come with a digital transformation often produce more questions than answers, so we have developed a repository of training material known as TruCert. The TruCert material can be compiled and constructed to address one technology in detail, or multiple technologies at a high level. In conclusion, understanding your ERP at a process level, gaining hands-on experience with S/4HANA, and ramping up on your S/4 knowledge are all ways in which the digital transformation to S/4HANA can become less intimidating.

JS Irick: “Truly understanding the inner workings of your ERP will help to reduce some of the barriers to entry that come with S/4HANA” – I think you’ve really hit the nail on the head here. Those who truly understand their current system are indispensable in a Financial Transformation. Can you lay out a couple of areas where customer knowledge is crucial? I know that’s a super broad question, but I’m looking for something along the lines of “Understanding Customer/Vendor enhancements vis-à-vis the move to Business Partner”. 

Matt Montes: Customer Vendor Integration (CVI) to Business Partner is definitely a big component of the transition to S/4HANA; however, for me the most important area of customer knowledge lies in the ability to understand and speak to the cost flow and end-to-end business processes associated with your ERP. It is one thing to understand your ERP at a process level, but even more important is understanding what master data and configuration points are being leveraged throughout those processes. It is critical to not only understand how things are done, but also why they are being done. For example, a quick analysis of an ERP can uncover the fact that certain costs booked to sending Cost Centers are ultimately settled to a Fixed Asset; however, understanding why those costs are being settled and what configuration is dictating that settlement is exponentially more valuable. To me, this type of skill set and knowledge can be the difference between a good implementation and a great one.

JS Irick: Matt, thank you for taking the time to speak with me today. S/4HANA and Financial Transformation are such broad topics, with so much content published daily, that it is really refreshing to sit down and speak with an expert about a few key areas. All the best in your upcoming projects and I look forward to speaking with you again.

About our contributors:

JS Irick

Director of Data Science and Artificial Intelligence

JS Irick has the best job in the world: working with a talented team to solve the toughest business challenges. JS is an internationally recognized speaker on the topics of Machine Learning, SAP Planning, SAP S/4HANA, and software development. As the Director of Data Science and Artificial Intelligence at TruQua, JS has built best practices for SAP implementations in the areas of SAP HANA, SAP S/4HANA reporting, and SAP S/4HANA customization.

Matt Montes

SAP Financials Senior Consultant, Trainer, and Speaker

Matt Montes is a Finance and data-driven professional currently working as an SAP Financials Senior Consultant. He received his education from the University of Texas at Austin with a degree in Finance and a minor in Accounting. His professional experience comprises financial analysis, budgeting, consolidation, accounting, and data analytics. Matt has 4 years of financial consulting experience utilizing Central Finance, S/4HANA, ECC, Fiori, and BPC. In addition to 2 full lifecycle implementations, product development, and 6 POCs, Matt also works as a trainer and content developer. He has conducted two internal and external trainings covering Central Finance, Group Reporting, SAC, and Reporting & Analytics. Additionally, he has written and delivered two blogs, a webinar, and a 3-part demo series on Central Finance.

Thank you for reading this installment in our New Perspectives blog series. Stay tuned for our next post where we will be talking to Recruiting Director Stephanie Hettrich on identifying and developing young consulting talent.

For more information on TruQua’s services and offerings, visit us online at www.truqua.com. To be notified of future blog posts and be included on our email list, please complete the form below.

 

TruQua Expands its Thought Leadership with Key Strategic Hires

Chicago, IL USA – May 3, 2019 – TruQua Enterprises LLC, a leading Finance and Business Analytics software and consulting services provider, is proud to announce the recent hires of Kirk Anderson and Leo Schultz from SAP and Parminder Ghatahora and Marc Six from the Big Four consulting firms.

“We believe in quality over quantity, especially for finance transformation initiatives at our customers,” says David Dixon, co-founder and partner at TruQua. “The larger the change and project, the greater the need for top-tier talent and leadership to catalyze change; otherwise, you risk a big and hard fall.”

TruQua’s strategic new additions and their roles are:

Kirk Anderson was a leading authority at SAP as a Chief Solution Expert and VP of Solution Management. He is widely recognized as a leading financials expert both at SAP and among customers, with over 25 years of deep SAP experience, including the specialized areas of corporate financial reporting, operational and tax transfer pricing, and financial planning and analysis. Kirk is a Chartered Accountant and worked a number of years in public accounting prior to joining SAP. He joins TruQua as the Chief of Financial Solutions, bringing his SAP relationships and insights into SAP S/4HANA.

Leo Schultz was the lead Solution Engineer for SAP S/4HANA Finance for North America at SAP. During early customer engagements in the sales process, he provided finance expertise, business case development support, benchmark advisory consulting, and system demonstrations. He joins TruQua with over 25 years of experience in Finance and Accounting. At TruQua, Leo Schultz is the Senior Vice President of Global Sales.

Parminder Ghatahora was a Director of Advisory at a Big Four consultancy. He led the North American Consumer Markets Enterprise Performance Management practice in engagements that assisted clients’ development of strategic vision, solution implementation, and performance improvement enabled by SAP in the area of finance. He brings over 20 years of experience in the finance and technology space and is a member of the Chartered Institute of Management Accountants. Parminder joins TruQua as a Principal and Practice Lead.

Marc Six was a leading SAP S/4HANA and SAP Financials expert in the Enterprise Solutions practice at a Big Four consultancy. Prior to that, he was a specialist on the SAP MaxAttention team working on the earliest implementations of SAP S/4HANA Finance and Central Finance, including co-development projects. Marc joins TruQua as a Principal and Practice Lead.

Kirk Anderson, on his rationale for joining TruQua, explains, “Having had the good fortune to work with TruQua in the past, I knew that joining their team would afford me the opportunity to work with a great group of highly skilled and creative people who are truly dedicated to the concepts of quality, integrity and commitment, both to customers and their employees. This is a company where I feel my contribution can really make a difference.”

Scott Cairncross, co-founder and partner at TruQua, concludes, “Our team is really excited to welcome this exceptional group of individuals to our smart, driven and nice culture. And our customers benefit not only from an expertise perspective but, more importantly, from the deep, meaningful and long-lasting relationships that our all-stars are able to foster.”

 

ABOUT TRUQUA ENTERPRISES
TruQua is an SAP service and software development partner specializing in software solutions, project implementations, and deployment strategies for SAP S/4HANA, SAP Financials, SAP Analytics, and overall SAP Platform and Technology solutions. TruQua continues to influence the direction of these products and build custom solutions to bridge any gaps in product functionality. TruQua also actively helps SAP customers derive value from maturing areas such as Machine Learning, Artificial Intelligence, and Blockchain. For more information, please visit www.truqua.com or follow us on Twitter @TruQuaE.

Marketing Contact:
Allison Murtagh, Marketing

TruQua
312-585-8787
allison.murtagh@truqua.com

 

New Perspectives Blog Series: Bringing AI to the Realm of Corporate Finance

Welcome to TruQua’s New Perspectives blog series.  This blog series provides a platform for our consultants to share their perspectives, insights, and experiences on emerging technologies such as SAP HANA and SAP Analytics Cloud. 

Today we are speaking with Senior Cloud Architect Daniel Settanni. Daniel’s passion is bringing AI to the realm of corporate finance. With his deep expertise in time series analysis, cloud architecture and SAP planning solutions, Daniel is able to not only create impressive Machine Learning models but also deploy them in an integrated, accurate and explainable way. As you will see below, Daniel does an exceptional job at both demystifying the technologies behind AI/ML and “defeating the black box” through explainable AI.

Perspective 1: Getting Started with Machine Learning for Finance

JS Irick: Daniel, a few weeks back you and I hosted a pre-conference session at SAP-Centric where we led attendees through building their first Machine Learning model to predict OPEX (based on published financials for a Fortune 500 company).  In that case, it was easy for our attendees to be successful because the problem was already defined. When it comes to the enterprise level, identifying the opportunity and success criteria is the first major roadblock customers face. Can you talk a little bit about how customers can set themselves up for success in their foundational ML projects?

 

Daniel Settanni: Sure JS.  Figuring out where to start with ML can seem like an insurmountable challenge, and for good reason.  ML and AI pop up in the news every day.  Topics like ML-generated movie recommendations and AI-created art are cool, but not relatable to most enterprises.  On the flip side, Enterprise Software usually incorporates ML/AI in tasks that are common across wide swaths of the market and that focus on automation.  These are much more relatable, but don’t speak to how Enterprises can use ML to solve their own problems… from the mundane to the highly strategic.

So, now to your question. 

If an Enterprise is just starting out with Machine Learning and is having a hard time finding the right opportunity, then I’d pick something relatively simple, like a basic forecasting project.  Forecasting projects are a great place to start because the models can be kept simple, the predictions can be easily integrated with existing business processes, and the results deliver real value (I haven’t met an Enterprise yet that isn’t looking to improve its forecasting accuracy in one area or another).  But above all, projects like this provide real-world experience with the ins and outs of ML – plus they always generate a ton of new ideas on where to use ML next.
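
For readers wondering what such a “relatively simple” starting project might look like in practice, here is a purely illustrative sketch along the lines of the OPEX exercise mentioned above: a handful of features, a basic regression model, and a single forecast a planner can sanity-check. The figures and feature choices are invented.

```python
# Minimal sketch of a "first forecasting project": predict next-quarter OPEX from a few
# published financial figures. The numbers are made up; the point is the small scope --
# a handful of features, a simple model, and a prediction a planner can sanity-check.
import numpy as np
from sklearn.linear_model import LinearRegression

# Features per quarter: revenue ($M), headcount, prior-quarter OPEX ($M) -- all illustrative.
X = np.array([
    [510.0, 4200, 118.0],
    [525.0, 4350, 121.0],
    [540.0, 4400, 124.0],
    [560.0, 4550, 127.0],
])
y = np.array([121.0, 124.0, 127.0, 131.0])  # OPEX in $M

model = LinearRegression().fit(X, y)

next_quarter = np.array([[575.0, 4600, 131.0]])
print(f"Predicted OPEX: ${model.predict(next_quarter)[0]:.1f}M")
```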

If the Enterprise has already identified the opportunity, then I’d make sure that their success criteria include delivering a way for users to interact with the model.  This could be as simple as pulling predictions into an existing business system as a new Forecast version, or entail developing a custom dashboard for what-if analysis.  In any case, if the success criterion is simply to build an accurate model that never sees the world beyond the data science team, they will be losing out on the vast majority of ML’s true Enterprise value.

JS Irick: “But above all, projects like this provide real-world experience with the ins and outs of ML – plus they always generate a ton of new ideas on where to use ML next.”  That’s definitely my favorite part of working on these projects with you. We get to see the lightbulb go on in real time, which leads to fascinating strategic discussions. I believe that the best consultants not only help their clients with their current project, but also help chart the way forward through education and enablement.

Can you talk a bit more about “interacting with the model”, with some examples? I think this is important for folks just getting started with AI/ML.

Daniel Settanni: Absolutely. The main point here is that the more completely people can interact with something (an ML model in this case), the more they will understand it and the greater the understanding, the greater the potential for meaningful insights.

This “interaction” can look very different depending on the business problem being solved.

For example, if we built out a Forecasting model, the minimum level of interaction would result in a spreadsheet.  This isn’t a great option for a lot of reasons, the most basic of which is that it’s not in the same place as the related business data.

We can fix that by integrating our hypothetical Forecasting model with the related Enterprise Application.  Now the forecasts can be viewed in context, but there isn’t any transparency around how the model came to the conclusions it did.  The best case here is that the forecast is proved to be accurate at which point the business will just accept it – but being overly reliant on a model in this way is dangerous.

So, next, we’ll add explainability to our model.  Based on this, our analysts gain insight into not only what the model predicted, but also why it arrived at that answer.  Since our analysts are the true experts, this can lead to valuable feedback on ways to make the model better.  Because there’s transparency, it can also become more of a trusted tool.

We’ve made a lot of progress from our spreadsheet, but we don’t have to stop there.  We could make the model truly interactive by allowing analysts to tweak its assumptions and see the impact on the prediction and explanations.  At this point, you have what I like to call a Strategic model, one that can aid in making business decisions.
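
A minimal sketch of that kind of what-if interaction is below: take a trained model, tweak one assumption, and compare the new prediction against the baseline. The model and data are illustrative stand-ins, not the actual forecasting model from any engagement.

```python
# Minimal sketch of "what-if" interaction: tweak one model input and compare the
# prediction against the baseline. Model and data are invented stand-ins.
import numpy as np
from sklearn.linear_model import LinearRegression

# Train a toy model: forecast sales from a price index and marketing spend (made-up data).
X = np.array([[100, 1.0], [102, 1.2], [105, 1.1], [107, 1.4], [110, 1.5]])
y = np.array([50.0, 54.0, 53.0, 58.0, 61.0])
model = LinearRegression().fit(X, y)

baseline = np.array([[112.0, 1.5]])
scenario = baseline.copy()
scenario[0, 1] += 0.3   # what if we increase marketing spend by 0.3?

base_pred = model.predict(baseline)[0]
what_if_pred = model.predict(scenario)[0]
print(f"Baseline forecast: {base_pred:.1f}")
print(f"What-if forecast:  {what_if_pred:.1f} (delta {what_if_pred - base_pred:+.1f})")
```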

Before moving on, I’d like to highlight another example to show how this methodology can be applied to other areas.  As you know, we built out an Employee Retention model last year that was integrated with SuccessFactors.  The basic output of the model was the likelihood an employee would leave during the next year.  The predictions were based on factors like salary growth, time at the company, historical promotions, etc.

To make this model most valuable, we didn’t stop at the raw prediction.  We created a dashboard where HR analysts could actually predict the impact of interventions, greatly increasing the chance they could retain their top talent.

These are just a few examples of why I believe interaction is one of the core pillars of a successful, and valuable, ML-based solution.

Perspective 2: From Prediction to Explanation to Intervention

JS Irick: From my work in medical science, I’ve always felt that researchers first seek to understand so that you can intervene. While the moment of discovery is exciting, it pales in comparison to using that discovery to impact change. Machine Learning changes the paradigm a bit, in that first you predict, then you explain, then you intervene. This leads to two questions – first, how do you get predictions into analysts’ hands quickly enough so that there is time to intervene? Second, can you explain to our readers how you go from a raw numerical prediction to actionable insights?

Daniel Settanni: You bring up some great points here.  Without insight, there can be no action and without action, there can be no value. 

The first question is easy to answer, but not always simple to implement.  To get predictions into Analysts’ hands quickly enough to intervene, a model must either be integrated with their business system or directly accessible in some other way.  If Analysts have to wait for quarterly or even monthly reports, then they’ve probably missed their chance to act.  On the other hand, allowing them to perform some degree of what-if analysis in real time can put them dramatically ahead of the curve.
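
To make “directly accessible” concrete, here is a minimal sketch of one way to expose a model in real time: a small Flask service that returns predictions on demand, which a planning tool or dashboard could call. The model, endpoint name, and payload shape are illustrative assumptions, not a description of any specific TruQua or SAP integration.

```python
# Minimal sketch: exposing a trained model as a real-time prediction endpoint.
# The model, route, and payload format are illustrative; a production service would add
# authentication, input validation, and logging.
import numpy as np
from flask import Flask, request, jsonify
from sklearn.linear_model import LinearRegression

# Toy model trained at startup; in practice you would load a persisted artifact.
X_train = np.array([[100.0, 1.0], [105.0, 1.2], [110.0, 1.4], [115.0, 1.6]])
y_train = np.array([50.0, 53.0, 57.0, 60.0])
model = LinearRegression().fit(X_train, y_train)

app = Flask(__name__)

@app.route("/predict", methods=["POST"])
def predict():
    payload = request.get_json()                      # e.g. {"features": [112.0, 1.5]}
    features = np.array(payload["features"]).reshape(1, -1)
    return jsonify({"prediction": float(model.predict(features)[0])})

if __name__ == "__main__":
    app.run(port=5000)
```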

One quick anecdote before I move on to question number two… in my experience, the initial success criterion for an ML project is accuracy.  This makes complete and total sense, but once you deliver an accurate model, the next logical question is “Why?”  No one likes a black box, and even more importantly, no one trusts a black box.  Without some degree of understanding, trusting the results of an ML model can feel like a leap of faith, and who wants to bet their career on that?

So how do you get from raw numerical predictions to actionable insights?  It starts with deeply understanding the problem you are trying to solve and building your model around that (instead of just accuracy).  This involves carefully selecting features (model inputs) that are related to the question at hand.  Having relatable features can give analysts some confidence in a model, but adding in Explainable AI, a technology that figures out each feature’s contribution to a prediction, can really deliver the trust needed to go from prediction to action.
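
To show what “each feature’s contribution to a prediction” can mean in practice, here is a minimal sketch using the simplest possible case, a linear model, where each contribution is just the coefficient times the feature’s distance from the average. Libraries such as SHAP generalize the same idea to non-linear models; the employee-retention-style feature names and figures here are invented.

```python
# Minimal sketch of per-feature contributions for a linear model: for a prediction
# f(x) = intercept + sum(coef_i * x_i), each term coef_i * (x_i - mean_i) shows how far
# that feature pushed the prediction away from the average case. Data is illustrative.
import numpy as np
from sklearn.linear_model import LinearRegression

feature_names = ["salary_growth_pct", "years_at_company", "promotions"]
X = np.array([[2.0, 1.5, 0], [4.5, 3.0, 1], [1.0, 6.0, 0], [6.0, 2.0, 2], [3.0, 4.0, 1]])
y = np.array([0.8, 0.3, 0.7, 0.1, 0.4])   # invented "likelihood of leaving"

model = LinearRegression().fit(X, y)

sample = np.array([1.5, 5.0, 0])           # the employee we want to explain
baseline = X.mean(axis=0)                  # the "average" employee
contributions = model.coef_ * (sample - baseline)

print(f"Average prediction: {model.predict([baseline])[0]:.2f}")
print(f"This prediction:    {model.predict([sample])[0]:.2f}")
for name, contrib in zip(feature_names, contributions):
    print(f"  {name:22s} {contrib:+.2f}")
```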

JS Irick: Without getting too deep into the technical side, can you talk a bit more about feature selection? In a lot of ways, I lean on my research experience, which means I focus on explaining statistical malpractice: “this will pollute your results because….” I’d love to hear a positive, actionable take on the value of feature selection.

Daniel Settanni: You’ve picked a topic close to my heart. Before diving in, here’s a quick recap on what a feature is.  In its most basic form, a Machine Learning model uses a bunch of inputs to predict an output. In Data Science lingo the inputs are called features and the output is called the label.

There’s one more topic we need to touch on before we can really talk about features… the different reasons models are created in the first place.  This may sound like a silly question – we create models to predict things of course! 

But… there are different types of predictions.

For example, I may want to create a model that can remove human bias from my Sales forecasts, or I may want a model that can accurately predict the impact of a certain business decision to my Sales forecast.  In both cases, we’re forecasting the same thing, but our goals are very different. 

How can we do this?  The answer lies in feature selection. 

In the first scenario (where we want to remove human bias), we would focus on factors outside the scope of control of the business.  These would likely include macroeconomic data, sales trends, consumer (financial) health, and the like.  By training a model with these types of features, we would be capturing the historic performance of a company.  These types of models tend to be very accurate, especially for mature organizations.

In the second scenario, we want to do pretty much the opposite.  Instead of focusing on things we can’t change to capture historical performance, we look at the things we can – so it’s much more of a microeconomic viewpoint.  By adding explanations to the solution, a model like this can empower decision makers to get accurate information on how various decisions will impact the bottom line.  That said, this model is going to be extremely vulnerable to human bias, so while it can be an amazing strategic solution, it isn’t a great pure forecasting one.

And there’s no law that says all features have to be macro vs microeconomic.  In fact, many are mashups if you will.  So ultimately the key isn’t to match the features to what you’re predicting, it’s to match the features to the question you’re trying to answer.
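
A small sketch of that idea follows, under the assumption that the history sits in a pandas DataFrame with the named columns: the same model type is trained twice, once on external (macro) features for a bias-free forecast and once on controllable (micro) levers for what-if analysis. The feature names, the `history_df` DataFrame, and the `train` helper are all hypothetical.

```python
# Minimal sketch of matching features to the question rather than to the target.
# Both models forecast the same revenue figure, but the first (external features only)
# aims to remove human bias, while the second (levers the business controls) supports
# what-if decision making. Feature names and data wiring are illustrative.
from sklearn.ensemble import GradientBoostingRegressor

# Scenario 1: "unbiased forecast" -- things the business cannot change.
external_features = ["gdp_growth", "consumer_confidence", "industry_sales_trend"]

# Scenario 2: "strategic what-if model" -- levers the business can pull.
internal_features = ["price_change_pct", "marketing_spend", "sales_headcount"]

def train(df, features, target="revenue"):
    """Train the same model type on a chosen feature set (df is a pandas DataFrame)."""
    model = GradientBoostingRegressor(random_state=0)
    model.fit(df[features], df[target])
    return model

# forecast_model = train(history_df, external_features)   # bias-free forecasting
# decision_model = train(history_df, internal_features)   # what-if / strategic analysis
```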

Perspective 3: Into the Wild

JS Irick: As you know, many wonderful forecasting tools never make it out of the “validation” phase. Integration, maintenance, retraining, and continuous validation are critical for the long-term health of any project, but especially an ML/AI project. Unmonitored predictive models tend to fail “silently”, in that there’s no production-down moment. Our product Florence is one way for customers to ensure not only best practices in model development but also long-term model health. Can you talk a little bit about the challenges customers face and how Florence solves them?

Daniel Settanni: Glad to.  ML/AI projects often focus, in some cases solely, on building an accurate model.  This is a fine approach if you’re in a scientific setting, but in the Enterprise simply building the model is only a small piece of the puzzle. 

To get the most value out of an Enterprise ML project, it has to be:

  • Accurate
  • Interactable
  • Explainable

A model alone can only deliver on accuracy. 

To be interactable, the model has to be accessible in real time.  This means it has to be deployed somewhere and either integrated into an existing system or exposed through a net new application. 

To be explainable, the appropriate technology must be deployed alongside the model and integrated into the prediction process.

The challenges that come with making a model interactable and explainable are considerable and often require ongoing collaboration with the DevOps and Development teams.  I highlighted “ongoing collaboration” because this is a commonly missed cost/risk.  During the lifetime of an ML/AI project, its model(s) will likely have to be retrained many times.  When a model gets retrained, the data preparation steps often have to be updated, and when that happens, corresponding changes have to be made by the DevOps and Development teams.  The worst part is that if the changes aren’t made exactly right, the models will keep on delivering predictions.  They’ll just be less accurate, probably way less accurate.  And if you’re making decisions off of those predictions, that could be very costly.
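
One generic way to reduce that risk is to package the data preparation steps and the model into a single deployable artifact, so retraining and serving can never silently drift apart. The sketch below shows the idea with a scikit-learn Pipeline; it illustrates the failure mode Daniel describes rather than how Florence itself is built.

```python
# Minimal sketch of keeping data preparation and the model in one artifact, so a retrain
# cannot silently diverge from the prep logic running in production. Generic illustration,
# not a view into Florence's implementation. Training data is invented.
import joblib
import numpy as np
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import Ridge

X_train = np.array([[1200.0, 3.0], [1500.0, 4.5], [900.0, 2.0], [1700.0, 5.0]])
y_train = np.array([10.2, 13.1, 7.8, 14.9])

pipeline = Pipeline([
    ("scale", StandardScaler()),   # the prep step travels with the model
    ("model", Ridge()),
])
pipeline.fit(X_train, y_train)

# Persist prep + model together; the serving side loads one object and cannot
# "forget" to apply the scaling that training used.
joblib.dump(pipeline, "forecast_pipeline.joblib")
serving_pipeline = joblib.load("forecast_pipeline.joblib")
print(serving_pipeline.predict(np.array([[1400.0, 4.0]])))
```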

Most solutions only deliver on a few pieces of an ML/AI project, leaving it up to each customer to figure out everything else.  We took a very different approach with Florence.

Florence covers the entire process, from creating accurate and explainable models, to making them available in real time, to providing the APIs and security necessary to integrate with practically any Enterprise system. 

One of my favorite technological advances is the way Florence abstracts away things like data preparation, so all the Developers have to focus on is creating the best user experience, and users can be confident that predictions aren’t wrong due to integration issues.

JS Irick: Excellently put.  I’m a big believer in Eric Raymond’s “The Art of Unix Programming”. I find that the rules still hold up (also, it’s interesting that some of the proto-ML techniques are coming back into vogue). Some of the rules speak strongly to the strengths of ML – “Avoid hand-hacking; write programs to write programs when you can” and “Programmer time is expensive; conserve it in preference to machine time” come immediately to mind.  However, you’ve touched on something that shoots up red flags – “When you must fail, fail noisily and as soon as possible”. Some of the toughest technical issues we face come when a system is failing silently, producing unintended consequences downstream. Especially when it comes to algorithms whose results are making financial decisions. Ask any futures trader: they’d rather the system crash than give incorrect responses due to a bug.

You hit the nail on the head when you noted that Florence applies the necessary data prep on the decision side as well as the training side. If numbers need to be scaled, normalized, etc. that should absolutely be on the server side. As a user, I get so salty when I hear things like “Oh, you forgot to turn your image into an array of integers before submitting it”. Let people speak in their language, and if there’s any data prep that needs to be done, it needs to be done in an abstracted, centralized way.

You’ve been doing some tremendous UX work in Florence recently, got a teaser for us?

Daniel Settanni: I’ve got the perfect images for this conversation.  They’re of Florence’s model validation view for a Macroeconomic model. 

The first screenshot shows the incredible accuracy we obtained, but perhaps, more importantly, the explainability Florence delivers.  Information like this can drive extremely valuable insights, and with Florence it doesn’t come with any additional work – it’s baked right in.

JS Irick: Thank you so much for spending time with me today, Daniel. I always learn a tremendous amount when we speak, and even better, I get fired up to build new things. Hopefully, our readers were both educated and inspired as well.

Daniel and I consistently share articles/podcasts/news on AI/ML topics, and we’d love it if you all joined the conversation. Be on the lookout for our upcoming weekly newsletter which will go over the most interesting content of the week.

About our contributors:

JS Irick

Director of Data Science and Artificial Intelligence

JS Irick has the best job in the world: working with a talented team to solve the toughest business challenges. JS is an internationally recognized speaker on the topics of Machine Learning, SAP Planning, SAP S/4HANA, and software development. As the Director of Data Science and Artificial Intelligence at TruQua, JS has built best practices for SAP implementations in the areas of SAP HANA, SAP S/4HANA reporting, and SAP S/4HANA customization.

Daniel Settanni

Senior Data Scientist and Cloud Architect

Daniel Settanni is living the dream at TruQua, using innovative technology to solve traditionally underserved Enterprise challenges. Making advanced technology more accessible with thoughtful design has been Daniel’s passion for years.  He’s currently focused on Florence by TruQua, an innovative Machine Learning solution that delivers Interactive and Explainable AI in a fraction of the time, and cost, of any other product on the market.

Thank you for reading this installment in our New Perspectives blog series. Stay tuned for our next post where we will be talking to Senior Consultant Matt Montes on Central Finance and the road to S/4HANA for mature organizations.

For more information on TruQua’s services and offerings, visit us online at www.truqua.com. To be notified of future blog posts and be included on our email list, please complete the form below.

 

Machine Learning…Demystified

Daniel Settanni is a Senior Cloud Development Architect at TruQua Enterprises.

Artificial Intelligence (AI), Machine Learning (ML), Predictive Analytics, Blockchain – with so many emerging technologies (and the associated buzzwords), it can be a challenge to understand how they can fit into your business. Here’s a short primer to help anyone new to the topic make sense of Machine Learning in the Enterprise.

 

What is Machine Learning?


There are quite a few technical definitions of machine learning, but they all boil down to the same concept: Machine Learning is a technique that uses advanced math to learn how different pieces of data are related to each other.

Three of the most useful types of ML models in the business world are:

  • Regression models learn how to make numeric predictions
  • Classification models learn how to classify samples into different known groups
  • Clustering models learn how to group samples all by themselves, i.e., without a predefined set of sample groups.

Let’s run through an example of when each could be used.

 

Predicting Sales Volume, a Regression example


Almost any organization that sells something goes through a process where they forecast future sales.  This process usually entails analyzing some mix of historical performance, external influences (market conditions, etc.), and internal strategy (pricing changes, etc.)

The forecasts are often performed by different groups, like Sales and Operations, and then passed to the Management team for review.

So why might we want to use Machine Learning to forecast sales instead of staying with the existing procedure?  One reason is to minimize bias.  In this case, the bias we’re trying to remove is a human one.  Forecasts have consequences that impact the very people creating the forecasts, so it’s only natural for bias to creep in. 

By creating a Machine Learning model, we could remove the human bias and introduce additional value-adding features like explainability (why the Machine Learning model is predicting a number) and what-if analysis (what would happen to our sales forecast if one or more inputs were to change).

 

Detecting Fraud, a Classification Example


We live in a period of rapidly evolving, and expanding, Financial solutions targeted directly to consumers.  These solutions are collectively known as the FinTech industry, and although each company’s offerings are different, they all share a common problem… Fraud.  And this problem is by no means a new one.  Fraudulent transactions originating within a company can be an even bigger problem than those committed by outside actors. 

So, what’s the best way to start solving the fraud problem? 

We could hire a huge team of analysts to comb through every transaction looking for tell-tale signs of fraud.  This solution is doomed to fail.  It would be expensive, have varying effectiveness (depending on the analyst), slow down transaction throughput, and maybe worst of all, do nothing about fraud originating within a company.

Another idea would be to have a smaller team analyze past fraudulent transactions, creating tests that could be used to programmatically check for fraud.  This is certainly a better option, but with transaction volume (and the risk of internal fraud), it also has a number of drawbacks.

Machine Learning could go much further in solving this problem.  By creating a model based on historical Fraudulent and non-Fraudulent transaction details, an organization could benefit from a mathematically sound analysis of much larger sets of data.  And, since creating a model is much less labor intensive, it could be recreated (retrained in ML speak) to more rapidly identify new forms of fraud. 
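
For the technically curious, here is a minimal, invented sketch of what such a classification model looks like in code: train on labeled historical transactions, then score new ones. A real model would use far richer features and careful handling of the heavy class imbalance that fraud data always has.

```python
# Minimal sketch of a fraud-classification model: train on historical transactions labeled
# fraudulent / legitimate, then score new ones. Features and figures are invented.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Features per transaction: amount, hour of day, initiated internally (1) or externally (0).
X = np.array([
    [120.0, 14, 0], [8900.0, 2, 0], [45.0, 11, 0], [7600.0, 3, 1],
    [230.0, 16, 0], [9900.0, 1, 1], [310.0, 10, 0], [15.0, 13, 0],
])
y = np.array([0, 1, 0, 1, 0, 1, 0, 0])   # 1 = known fraud

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

new_transactions = np.array([[8200.0, 2, 1], [60.0, 12, 0]])
for tx, prob in zip(new_transactions, clf.predict_proba(new_transactions)[:, 1]):
    print(f"amount={tx[0]:>7.2f}  fraud probability={prob:.2f}")
```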

 

Customer Segmentation Analysis, a Clustering Example


In this age of technology, one thing organizations aren’t at a loss for is data.  They have so much data that it can be difficult, if not impossible, to analyze it all in any traditional way.  This can lead to huge potential losses in both strategic and monetary terms.

Clustering models are amazing in their ability to identify relationships within huge amounts of data that could otherwise go undetected.  One concrete example of Clustering is Customer Segmentation Analysis.  Here the model could group customers based on any number of patterns, such as purchasing behavior, product mix, and timing.  These insights ultimately lead to a better understanding of any organization’s most profitable asset… their customers. 
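
Here is a minimal, invented sketch of that kind of segmentation: customers described by a few behavioral features are grouped by a clustering model without any predefined segments. The feature choices and figures are illustrative.

```python
# Minimal sketch of customer segmentation with clustering: group customers by purchasing
# behavior without predefining the segments. Data and feature choices are invented.
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import KMeans

# Per customer: annual spend, orders per year, distinct product categories purchased.
customers = np.array([
    [52000, 48, 12], [1200, 3, 1], [48000, 52, 10], [900, 2, 2],
    [25000, 20, 6],  [27000, 22, 5], [1500, 4, 1],  [51000, 45, 11],
])

scaled = StandardScaler().fit_transform(customers)          # put features on one scale
segments = KMeans(n_clusters=3, random_state=0, n_init=10).fit_predict(scaled)

for customer, segment in zip(customers, segments):
    print(f"spend={customer[0]:>6}  orders={customer[1]:>3}  segment={segment}")
```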

 

Machine Learning Implementation Process


Now that you have a better understanding of what types of problems Machine Learning can solve, you might be wondering what it takes to implement. The basic process includes the following steps:

  1. Understanding the problem that needs to be solved
  2. Analyzing and preparing the data
  3. Creating an accurate model
  4. Integrating that model with existing systems and processes

Another key question you’ll need to ask is, who can do all of this? In some cases, a software vendor can deliver Machine Learning capabilities out-of-the-box. This works best when a problem is well defined and common within a specific business or industry process.

SAP’s Cash Management application, for instance, is a perfect example of a solution that can harness the full benefits of machine learning, because Cash Management challenges across organizations are so similar.

But what if an out-of-the-box solution doesn’t exist? This is where you’ll need to go a step further and employ the skills of a data scientist and an area where TruQua can help.

Conclusion

Machine Learning provides incredible possibilities for an organization to not only automate Finance Processes (through categorization and regression) but also to discover new insights from their data (through explainable AI and segmentation).

A successful ML implementation requires a cross-functional team that takes a critical look at everything from data selection and preparation to the way the model’s insights will be used.  As such, it’s critical to not only pick the right problem but the right partner.

 

How TruQua can help


TruQua’s team of consultants and data scientists merge theory with practice to help our customers gain deeper insights and better visibility into their data.  Whether it is scenario identification, building out your Machine Learning models, or integration with Finance systems, TruQua has the tools needed for success. So, if you’re looking to make more informed business decisions utilizing the latest predictive analytics and Machine Learning capabilities from SAP, we’d love to hear from you.

 

 

New Perspectives Blog Series: Maximizing Enterprise Collaboration with SAP

Welcome to TruQua’s New Perspectives blog series.  This blog series provides a platform for our consultants to share their perspectives, insights, and experiences on emerging technologies such as SAP HANA and SAP Analytics Cloud (SAC). This week we are going to be speaking with Andrew Xue.  Andrew is a Consultant at TruQua, specializing in cutting-edge cloud analytic and planning technologies.  Today Andrew and I are going to discuss some of the key trends we are seeing at customers as it relates to Collaborative Enterprise Planning and the latest capabilities in SAP Analytics Cloud.

Perspective 1 – Creating organizational alignment with Collaborative Enterprise Planning

JS Irick:  I find Collaborative Enterprise Planning to be a really effective concept since customers intuitively understand what it means and are excited by the possibilities. In addition to SAP’s talk tracks of organizational alignment and breaking down silos, SAP Analytics Cloud allows stronger collaboration within teams, thanks to technologies like shared Private Versions and in-app chat.  Andrew, can you talk a little bit about the collaboration you are seeing on implementations both from a macro (organizational) and micro (team) perspective?

Andrew Xue: The two you mentioned, Private Versions and in-app chat, are just scratching the surface of how collaborative SAP has made SAP Analytics Cloud. With Private Versions, users can not only draft the changes they want to see without impacting real data, but also use Private Versions to create instantaneous backups of real data as an unofficial checkpoint. Users can share a Private Version with other users or teams so that multiple people can collaborate on that information. Definitely beats sending out ten Excel spreadsheets at once!

Picture: An instance of sharing a private Budget version with Read and Write access to other users or teams

I also find in-app chat to be one of the most interesting features of the tool because it is the backbone connecting users to each other, and then to their data. First, there are stories: at a client, we created many stories designed as templates, so that users could not change those stories but could save a copy as a starting point or use them as inspiration. Once users were more familiar with story-building (graphs, buttons, tables, and more), they could create as many as they wanted in their own Private Folders, such as one-off reports for checking on some data, drafting new reports, or testing a feature for potential new value. This is where it gets interesting, because users could then share their private stories, with their own reports built out, with their superiors or teammates. Read-only or edit permissions can be set for everyone the story is shared with! Instead of endless back-and-forth emails, people can simply change a chart and make a note of it in the team group chat! So far, customers have used this tool both to ask me how a certain feature works (and how it helps them), and then to ask me when I would like to grab lunch.

Picture: From the chat menu, sharing is as easy as linking the story for anyone in the conversation to access

JS Irick: You’ve touched on a really interesting point when you mentioned both Private Versions and Private Folders. Let’s be real – Finance organizations are stocked with really smart people; if someone doesn’t have the tools needed to do their jobs, they’ll build it themselves. For a long time, organizations have tried to prevent “Shadow IT” by locking down the ability to save different versions or create untracked reports. Now SAP Analytics Cloud actually encourages good behavior by providing these capabilities in a centralized, organized fashion. It’s a great sign for the future that new features are designed for how customers use the tool.

Andrew Xue: Of course, the people I have worked with are all extremely smart and very qualified, holding CPAs, MBAs, and more. Once taught how to use SAP Analytics Cloud, they’ve built tools beyond my imagination. Avoiding “Shadow IT,” or simply not having to go through IT for very small changes that don’t impact real data, eliminates so many inefficiencies and frees up valuable time for people to do more meaningful activities. But this isn’t a tool that gives free rein to all who use it, because SAC also gives IT the ability to enact specific governance via SAC Security in roles & teams, data locking, and more.

People are accountable for their private folders: what items they create, how they organize them, and who they share their creations with. This way, the public folders are reserved for all reports, input templates, and more, instead of being cluttered with drafts.

 

Perspective 2 – Evolving business processes utilizing Native Analytics within Planning

JS Irick: The development path for SAP Analytics Cloud says a lot about the tool’s strength and SAP’s overall analytics strategy.  SAC started as Cloud for Planning (C4P), but the analytic components were so strong that the product became the go-to analytics tool for SAP as a whole. Planning applications have always had a begrudging tie to Microsoft Excel due to its calculation and display capabilities.  Now we’re planning within an Analytics and Visualization tool, which is a monumental shift. How are you seeing customers take advantage of these powerful capabilities, and how are they evolving business processes?

Andrew Xue: The largest shift, as you said, is from Excel to SAC. From what I’ve seen in a certain costing process that my current client conducts, the task used to take four 12-hour days of everyone’s work. Now, with the centralized collaborative tool we’ve created in SAC, the task takes a mere few hours, leaving so much more time for the users to conduct valuable analysis instead of checking whether their formulas in Excel dragged down correctly, something they simply had almost no time to do previously! Also, what the team finds to be so incredibly useful is how quickly they can run different scenarios and what-ifs, such as: how does changing a certain rate assumption affect total costs for a specific sector? Before, using Excel, if they even had time to, it would take a day or more just to edit multiple spreadsheets, confirm, run, and compare the changes. Now, that scenario can be done in about half an hour. In just one day they could be running tens of scenarios!

 

Along with that, the client team has access to so much more historical data at the tips of their fingers. They can compare past assumptions, run year-over-year analysis on a whim, and so much more. It previously took around 10 minutes just to get all the necessary Excel files open to build a chart, let alone compile the data. As the team became more familiar with SAC, they also became more creative, making more than just tables and input templates. I have seen them make waterfalls and dynamic pie charts, and they have even utilized some of the instantaneous Smart Insights on their data! It has been wonderful seeing them realize how powerful this tool’s capabilities are.

JS Irick: The analysis tools and visualizations you’re creating for your customers are truly impressive. In particular, I loved seeing your profitability reports, which blend advanced analytic views, what-if analysis, and drill-down to customer/SKU-level details. I’m curious though, how are you helping your customers get a handle on this incredibly versatile tool? Even at the most basic level, we’re going from a spreadsheet (which is easy to understand) to a story, which we could think of as multiple dashboards in the previous generation. So, we need to cover dashboard arrangement/design, dozens of new data visualizations, and new data features such as Value Driver Trees and Data Actions. This isn’t just moving someone from PowerPivot to Analysis for Office, this is a monumental shift.  Can you share a bit of your training strategy with our readers?

Andrew Xue: By all means in any implementation and move to a new tool, there is going to be a lot to learn. From what I have seen, the most successful strategy is changing the customer’s high-level process as little as possible; no change is preferred. How we help them pick up this new tool is by reviewing their process first at a high level in the new tool, and then at a more detailed level in segments, starting from where the data is, all the way through creating their first reports.

What we have found especially useful is parallel pathing. Earlier, when I mentioned a time reduction from several days to a few hours, that result was not negatively impacted by parallel pathing. In fact, running the processes in parallel only requires a little three-minute detour every now and then to show them what their work now looks like in the new tool. Then, once they are fully familiar with their responsibilities in the new tool, we help them run another process, only this time with our hands off.

Keys to a Successful Training Strategy with SAP Analytics Cloud

Perspective 3 – Starting a successful SAP Analytics Cloud implementation

JS Irick: TruQua recently presented at SAPInsider Financials 2019 on leveraging the SAC/BPC hybrid scenario.  One of the key points we focused on was the fact that performing a “Lift and Shift” (technical migration of your planning solution without making any modifications) ignores many of the features that make SAC such a dynamic environment. Since you’ve successfully worked with customers from both a Greenfield (first planning implementation) and Brownfield (improving an existing implementation), I would love to get your perspective on how customers can set themselves up for success in SAP Analytics Cloud.

Andrew Xue: I believe SAP Analytics Cloud is the perfect fit for customers seeking to make a change but who aren’t comfortable with a massive alteration. In many scenarios across every industry now, a significant portion (in some cases, all!) of business processes run on Excel. SAC gives users a platform to transition to while still retaining some Excel familiarity, so we’re not throwing them in the deep end at all. We can’t force customers to drop everything they know from an old business process just to use a new tool that people tell them is better. And on top of that, expecting them to balance their responsibilities while completing weeks of training is illogical. For customers with existing implementations, SAC is quite a different tool, but one that serves only to improve on what they already have.

In both the Greenfield and Brownfield scenarios, customers on SAC were able to continue their current business processes while also picking up new features one at a time simply by being in SAC. This wonderful tool helps the users themselves figure out what was inefficient about their old methods while also giving them new tools to take everything they already do to the next level. Success for the customer here means being willing to welcome this new tool and understanding that the first steps are not big at all. Then, by using this tool more in their day-to-day work, customers develop better and faster approaches for gaining more valuable insight into their processes. With more experience in the tool, finding more features to augment their work, and easily adding valuable insights into their reports, customers are able to maximize SAC’s value. The beauty here with SAC is that people who use the tool are able to transition away from traditional methods at whatever pace they feel comfortable with.

JS Irick: So, having been through multiple successful implementations, what are the areas where you are seeing maximum value? Have there been any surprise wins that may inform the planning roadmap for organizations looking to improve their processes?

Andrew Xue: What I have been describing leads up to this: the highest-impact, biggest value-adds come from shifting away from heavily Excel-reliant processes. More generally, these are processes where the data is not centralized and where changes are tracked through copies named v2, v3, and so on. I agree that combining different data streams is a surprise success. As I mentioned earlier, one difficulty many clients face is comparing historical data, because the very act of compiling it can take hours. As you said, multi-source reports are a nice corollary of the implementation, and that is exactly what we have seen. As clients got comfortable with the new tool and went live, they realized they could pull data from many more sources, adding to the vast amount of data they already had at their fingertips. Whether that means combining multiple Excel documents or pulling from HANA and other sources, SAC centralizes it all and opens up valuable new opportunities.

I would also add that trusting the system to run calculations for you is another surprise success. When users take advantage of SAC features such as Data Actions, Allocations, and more, running those calculations in the tool lets them fully leverage the cloud’s resources and functionality. These calculation features, and simply letting the system do the work, are where a large portion of the time savings comes from.

JS Irick: Thanks so much for taking the time to speak with me today Andrew. With such an exciting, rapidly developing tool, I really enjoyed taking a step back and looking at the ways customers are generating real business value in SAC.

About our contributors:

JS Irick


Director of Data Science and Artificial Intelligence

JS Irick has the best job in the world: working with a talented team to solve the toughest business challenges. JS is an internationally recognized speaker on the topics of machine learning, SAP Planning, SAP S/4HANA, and software development. As the Director of Data Science and Artificial Intelligence at TruQua, JS has built best practices for SAP implementations in the areas of SAP HANA, SAP S/4HANA reporting, and SAP S/4HANA customization.

Andrew Xue


SAP Consultant and Software Developer

Andrew Xue is an SAP Consultant and Software Developer at TruQua who specializes in cutting-edge cloud analytics and services technologies. Most recently, Andrew has been building and implementing cloud applications for financial planning, global budgeting, and executive reporting. He has expertise in an array of SAP enterprise collaborative planning technologies, including SAP Analytics Cloud, SAP Business Planning and Consolidation, and the SAP Digital Boardroom.

Thank you for reading this installment in our New Perspectives blog series. Stay tuned for our next post where we will be talking to Daniel Settanni on the role of Strategic AI in Finance.

For more information on TruQua’s services and offerings, visit us online at www.truqua.com. To be notified of future blog posts and be included on our email list, please complete the form below.  

 

 

Mapping Planned Data from SAP Analytics Cloud to SAP S/4HANA: A Short Tutorial

SAP Analytics Cloud (SAC) combines the flexibility of the cloud with deep analytics that deliver greater insight into critical business processes. A powerful and agile tool for planning, analytics, and predictive insights, SAP Analytics Cloud enables full-fledged collaboration from anywhere in the world and provides real-time connectivity to on-premise systems such as SAP S/4HANA, removing traditional barriers and accelerating critical business processes.

This blog will walk through a Revenue and Cost Planning example and illustrate how to export and map that planning data from SAC into S/4HANA.

 

Planning in SAP Analytics Cloud: A Revenue and Cost Planning Example

With the release of SAP Analytics Cloud, enterprise planning has never been easier or more efficient.  With a wide array of planning capabilities and features coupled with the ability to separate hypothetical business scenarios from real data, companies are able to drive better business outcomes with collaborative enterprise planning while also gaining full alignment across all areas of the business.

In this example, you’ll see two planning-enabled tables located within a story in SAP Analytics Cloud. The charts on the right focus on gross margin by product and total revenue by product, and they update automatically as entries in the tables are adjusted. These entries can be made at a summarized node level (by cost center group or material hierarchy) or at the base level, by SKU or customer.

Source: TruQua Internal Systems
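For readers who want to see the arithmetic behind those charts, here is a minimal sketch in plain Python (purely illustrative; the product names and figures are made up and this is not SAC logic) of how gross margin and total revenue by product are derived from the plan entries:

```python
# Illustrative only: hypothetical plan entries by product (not real customer data).
plan_entries = [
    {"product": "PROD-A", "revenue": 120_000.0, "cogs": 78_000.0},
    {"product": "PROD-B", "revenue": 95_000.0,  "cogs": 61_750.0},
    {"product": "PROD-A", "revenue": 40_000.0,  "cogs": 26_000.0},  # second entry for the same SKU
]

def summarize(entries):
    """Aggregate revenue and gross margin by product, as the story charts do."""
    totals = {}
    for row in entries:
        agg = totals.setdefault(row["product"], {"revenue": 0.0, "gross_margin": 0.0})
        agg["revenue"] += row["revenue"]
        agg["gross_margin"] += row["revenue"] - row["cogs"]
    return totals

for product, agg in summarize(plan_entries).items():
    margin_pct = agg["gross_margin"] / agg["revenue"] * 100
    print(f"{product}: revenue={agg['revenue']:,.0f}, gross margin={agg['gross_margin']:,.0f} ({margin_pct:.1f}%)")
```

Adjusting any entry and re-running the aggregation mirrors what happens on screen: the charts recompute as soon as the table values change.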

 

When data is changed, a draft version is created for the end user to run their what-if planning scenarios on. Changes made by the user are not committed until the user confirms and publishes the data. When publishing, users can either commit all of their changes to the model or revert to the original data. SAP Analytics Cloud also highlights any data changes to keep track of modifications.

 

Source: TruQua Internal Systems
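Conceptually, the draft version behaves like a private staging copy of the data. The toy Python sketch below (an assumption-laden illustration of the publish-or-revert choice described above, not SAC internals) shows the idea:

```python
import copy

class PlanVersion:
    """Toy model of a draft version: edits stay private until published or reverted."""

    def __init__(self, data):
        self.published = data                 # what everyone else sees
        self.draft = copy.deepcopy(data)      # the user's private working copy

    def edit(self, cell, value):
        self.draft[cell] = value              # change is only visible in the draft

    def changed_cells(self):
        return {k: v for k, v in self.draft.items() if self.published.get(k) != v}

    def publish(self):
        self.published = copy.deepcopy(self.draft)   # confirm all changes into the model

    def revert(self):
        self.draft = copy.deepcopy(self.published)   # discard the what-if changes

version = PlanVersion({"PROD-A/2019.01": 10_000})
version.edit("PROD-A/2019.01", 12_500)
print(version.changed_cells())  # {'PROD-A/2019.01': 12500} -- flagged as modified
version.publish()               # or version.revert() to return to the original data
```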

 
Exporting and Mapping Planned Data from SAP Analytics Cloud into SAP S/4HANA

Now that we’ve run our what-if scenarios and published the data, we want to export that data to SAP S/4HANA. Thanks to the tight integration between SAC and SAP S/4HANA, we can write the new plan data directly to the S/4HANA planning table (ACDOCP).

 

Source: TruQua Internal Systems

 

Using the mapping services available from the SAP Analytics Cloud model, we can select which SAP Analytics Cloud dimensions to map and assign each one to the corresponding S/4HANA field. Defining or updating the mappings is a one-time process, and the export itself can be scheduled per implementation requirements.

Source: TruQua Internal Systems
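As a rough illustration of what such a mapping amounts to (the actual configuration is done in the SAC export interface; the dimension and field names below are assumptions for the sketch, not an official SAP specification), you can think of it as a simple lookup applied to every exported record:

```python
# Hypothetical mapping of SAC model dimensions to S/4HANA (ACDOCP-style) fields.
# Field names are illustrative assumptions; check your own system's field catalog.
DIMENSION_TO_FIELD = {
    "CompanyCode": "RBUKRS",
    "Account":     "RACCT",
    "CostCenter":  "RCNTR",
    "Date":        "FISCYEARPER",
    "Amount":      "HSL",
}

def map_record(sac_record):
    """Rename the dimensions of one SAC plan record to their target S/4HANA fields."""
    return {DIMENSION_TO_FIELD[dim]: value
            for dim, value in sac_record.items()
            if dim in DIMENSION_TO_FIELD}

print(map_record({"CompanyCode": "1010", "Account": "41000000",
                  "CostCenter": "CC100", "Date": "2019001", "Amount": 12500.0}))
```

Once defined, the same mapping is reused for every export run, which is why it only needs to be maintained when the model or the target structure changes.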

 

Another point worth noting is that, within the SAP Analytics Cloud dimension to S/4HANA field mapping process, users can also create filters on their planned data. This means they can be selective about the data they upload, for example only a certain period/year or a single cost center node. For Versions and Categories, users may map one Analytics Cloud version to multiple S/4HANA categories. In this example, the PLN version in SAP Analytics Cloud is mapped to PLN, PLN01, and PLN02 in SAP S/4HANA.

Source: TruQua Internal Systems
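To make the filter and the one-to-many version mapping concrete, here is a small sketch (again illustrative Python with assumed field names, not the SAC export engine) showing how a period filter and the PLN to PLN/PLN01/PLN02 mapping would fan out the exported rows:

```python
# One SAC version mapped to several S/4HANA categories, as in the example above.
VERSION_TO_CATEGORIES = {"PLN": ["PLN", "PLN01", "PLN02"]}

def export_rows(rows, fiscal_period=None):
    """Apply an optional period filter, then duplicate each row per target category."""
    for row in rows:
        if fiscal_period and row["FISCYEARPER"] != fiscal_period:
            continue  # filtered out: only the selected period/year is uploaded
        for category in VERSION_TO_CATEGORIES.get(row["version"], []):
            yield {**{k: v for k, v in row.items() if k != "version"}, "CATEGORY": category}

sample = [
    {"version": "PLN", "FISCYEARPER": "2019001", "RACCT": "41000000", "HSL": 12500.0},
    {"version": "PLN", "FISCYEARPER": "2019002", "RACCT": "41000000", "HSL": 13000.0},
]
for exported in export_rows(sample, fiscal_period="2019001"):
    print(exported)  # three rows: one per mapped category for period 2019001
```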

 

Once the data has been successfully exported to S/4HANA, a quick ACDOCP query verifies our results.

Source: TruQua Internal Systems
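If you prefer to verify from a script rather than a transaction, a query along these lines confirms that the new plan records landed in ACDOCP. It uses SAP’s hdbcli Python driver; the connection details, schema name, and field list are assumptions for illustration and should be adapted to your system:

```python
from hdbcli import dbapi  # SAP HANA client for Python

# Connection parameters are placeholders; use your own S/4HANA database details.
conn = dbapi.connect(address="s4hana.example.com", port=30015,
                     user="VERIFY_USER", password="********")

# Schema and field names are illustrative assumptions.
sql = """
    SELECT CATEGORY, RBUKRS, RACCT, FISCYEARPER, SUM(HSL) AS PLAN_AMOUNT
    FROM SAPHANADB.ACDOCP
    WHERE CATEGORY IN ('PLN', 'PLN01', 'PLN02')
    GROUP BY CATEGORY, RBUKRS, RACCT, FISCYEARPER
    ORDER BY CATEGORY, RACCT
"""

cursor = conn.cursor()
cursor.execute(sql)
for category, company_code, account, period, amount in cursor.fetchall():
    print(category, company_code, account, period, amount)

cursor.close()
conn.close()
```

Seeing one row per mapped category (PLN, PLN01, PLN02) with the expected amounts matches the results shown in the screenshot above.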

 

SAP provides a wide variety of out-of-the-box sample content to inspire users with the creative possibilities in SAP Analytics Cloud. With SAC’s connection options (S/4HANA, BPC, BW, and more), systems integration that previously required heavy IT involvement is within reach. For organizations looking to move away from a decentralized planning process and toward collaborative enterprise planning, SAP Analytics Cloud is a perfect fit.

[This post is based on SAP Analytics Cloud version 2019.5.1 and S/4HANA 1809 SP01.]

Stay tuned for our next blog where we show you how this data can be consumed in Group Reporting and used for consolidation purposes. For more information on SAP Analytics Cloud, S/4HANA, Group Reporting, and how they can take your business intelligence to the next level, please fill out the form below.

 
