Hello, friends! If you’ve made it past the headline, I’m going to assume you’re either a user of Einstein Prediction Builder or you’re fascinated by it and really really REALLY want your org to start using it. In either case, I feel comfortable referring to you as “friends.” (My birthday is November 6, in case you’re the type who sends birthday gifts to their friends.)
Because you’re already experts in Einstein Prediction Builder, I’ll keep my summary of that product very short. You set up a prediction, which looks at the fields on a record and predicts the eventual value of some other field (for example, OpportunityAmount) and places that predicted value in a custom field (for example, PredictedAmount). Every time a record is updated, Einstein evaluates the current state of the record against its Einstein model and updates the custom field to let you know how things are going to turn out in the end.
The truth about predicted custom fields
The custom field into which Einstein writes its predictions is known as an AI Prediction Field. Fields of this ilk are not like most custom fields. No, sir/madam.
As you are aware, creating or updating a record in Salesforce kicks off a whole lot of your own custom logic. This logic includes triggers, workflow rules, assignment rules, and roll-up summary calculations. And it all works together to keep your data clean and consistent and launch whichever notifications or other processes need to be launched. Collectively, we refer to all of this logic as the save order. Upon each write to the database, the save order gets assiduously followed so that your custom logic gets executed.
Updates to AI Prediction Fields will skip all of this save order logic.
I’m going to repeat that, because it’s so unusual that you may have just assumed I was trying to be funny, and moved along. But no. Updates to AI Prediction Fields will skip all of the save order logic.
“But why, Josh, why? I spent all of this time writing that logic, so why would you skip it? Don’t you like me? I thought we were friends.” OF COURSE, we are friends! (November 6…) Let me explain. We skip this logic because of the way the predictions get made.
Think of this from Einstein’s perspective. Einstein was very smart, obviously. He was all about calculating the speed of light and all of that, but even he would have told you that he couldn’t possibly be fast enough to think about your record update, make a prediction, and write that prediction before the record gets saved. (There’s probably some quantum physical paradox that would occur, but quantum physics is the one thing that my mortal mind refuses to wrap itself around, so I have no hilariously appropriate analogy.)
All of this predicting is happening asynchronously. A record update happens in your org, and this update is forwarded on to Einstein. Einstein makes a prediction and sends it back to your org. At this point, a second update is made, and it only updates the predicted field. The whole process might look really fast to you (because speed of light and all of that), but there is always a gap between the initial create/update and the prediction update.
Even though these are two separate transactions, they logically function as one. Einstein won’t update anything but the predicted field. Your logic won’t update the predicted field. (It could, but it shouldn’t. If it does, stop it!) You + Einstein = one complete transaction. But this complete transaction is actually two transactions from the database’s perspective. You’re probably now connecting the dots and figuring out where I’m going with this (very Einstein of you, BTW).
Your initial transaction will fire the save order and run all of the logic. The second Einstein transaction will not. Since these are logically the same transaction, you don’t need to run the logic twice.
There are a few good reasons for this. Efficiency, for one! If you have kids, you know how much fun it is to pick up their mess three times in a single hour (spoiler alert: zero fun). It’s exhausting to run the same process repeatedly when you really didn’t need to do it again. Also, you’ve written your logic to be smart, to only send notifications once and only update records one time. Right? Are you sure? Like, how sure? Instead of making you worry, we don’t run the logic twice on this two-phase transaction. Last but not least, we load predictions in bulk from Einstein to core, and those bulk transactions were timing out in our testing when all of the custom logic was being run. Probably because of all those conditionals you put in to make sure you were only sending one notification…
To recap:
- Your logic all runs on the initial save of the record.
- Einstein makes a prediction based on the updated record.
- This prediction is written to the AI Prediction Field in a distinct update.
- Your logic is not run again when this subsequent update happens.
If a prediction falls in the woods…
By now, you’re comfortable with us not running your logic a second time. Good! But I can see you getting agitated. (PSA: You should really turn off your webcam.) What if I have logic that’s based on the prediction? When the PredictedAmount is larger than $1 million, reassign to “me”! When the PredictedAmount drops by over 10%, quickly reassign to “not me,” post a warning in Chatter, and @mention the boss! That kind of thing. These are valid use cases, for sure. If a prediction happens, and there’s no logic around to hear it, does it make a sound?
To support these use cases, we’ve created the AIPredictionEvent.
When Einstein makes a prediction, it also generously creates an AIPredictionEvent. This event carries everything you need to build logic that runs on predicted values: the API name of the field that holds the prediction and, crucially, the ID of the record on which the prediction was made. You can mix and match these pieces to build both Apex logic and Process Builder logic.
The Apex programmers among you are probably already five steps ahead of me. You just write a trigger on the AIPredictionEvent, use all of the IDs given to you to retrieve the relevant records, and make magic happen. The targetID could be any type of object, but you can easily figure out if it’s the type you want using SOQL. Got it? Great! Okay, off you go.
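For the Apex-minded, here's a minimal sketch of what such a trigger might look like. The event field names (PredictionField, TargetId) and the custom field PredictedAmount__c are illustrative assumptions based on the labels used in this post; check the AIPredictionEvent object reference for the exact API names in your org.

```apex
// Sketch of a subscriber trigger on AIPredictionEvent.
// Field names on the event are assumptions -- verify them in your org.
trigger OppPredictionHandler on AIPredictionEvent (after insert) {
    Set<Id> oppIds = new Set<Id>();
    for (AIPredictionEvent evt : Trigger.new) {
        // Only react to predictions on the field we care about,
        // and only when the target record is an Opportunity.
        if (evt.PredictionField == 'Opportunity.PredictedAmount__c'
                && evt.TargetId != null
                && evt.TargetId.getSobjectType() == Opportunity.SObjectType) {
            oppIds.add(evt.TargetId);
        }
    }
    if (oppIds.isEmpty()) { return; }

    List<Opportunity> toUpdate = new List<Opportunity>();
    for (Opportunity opp : [SELECT Id, OwnerId, PredictedAmount__c
                            FROM Opportunity WHERE Id IN :oppIds]) {
        if (opp.PredictedAmount__c != null
                && opp.PredictedAmount__c > 1000000) {
            // Reassign to the running user; in a real org you would
            // pick a specific owner or queue here.
            opp.OwnerId = UserInfo.getUserId();
            toUpdate.add(opp);
        }
    }
    update toUpdate;
}
```

Note that platform event triggers run asynchronously as the Automated Process user, so assign ownership deliberately rather than relying on "whoever saved the record."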
(Are they gone yet? Okay, let’s proceed.) How do you mix and match all of these things in Process Builder? I’m glad you asked!
Process Builder and predictions
Wait — what did you just say? The event’s targetID could be any type of object? Yes, I did indeed say that. You might have one prediction that predicts PredictedAmount on Opportunity, one that predicts PredictedCategory on Case, and another that predicts IdealBirthdayGift on ProductManager__c. (Hint: POWER TOOLS. Maybe a table saw?) Apex makes quick work of this polymorphism using SOQL. (Polymorphism: One thing that can take many shapes. Meaning the target of the event could be any of these types. Long story.) Process Builder does not have SOQL. It does, however, have secret powers that we will use later in step 2 to tackle this problem.
Step 1: Create your process
Follow the normal path to create a Process Builder process. The key thing you need to do is select “A platform event message is received” for “The process starts when”.
Step 2: Add the right trigger
The first thing you do when creating any process is click the + Add Trigger node. It’s here that the wizardry will occur! The first selection you make is the AIPredictionEvent for your Platform Event.
The next selection is the Object. This is the first secret power I foreshadowed before! Once you select the type you are trying to write a process for (for example, Opportunity), Process Builder will let you point-and-click fields on that type later, when you make criteria and actions. Abracadabra.
Now comes the real secret power, the one that does what SOQL does in Apex: the matching conditions. The one and only rule you will make on the entry trigger will find the correct record for the process, and will tell the process only to run if that record is the correct type.
- For Field, select the ID of the object itself (for example, "Opportunity ID" for processes on Opportunity, "Case ID" for processes on Case).
- Under Type, select Event Reference, and under Value, select AI Predicted Object ID.
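Put together, the matching condition on the trigger node looks roughly like this (labels approximated from the Process Builder UI; your org may word them slightly differently):

```
Field:  Opportunity ID            (the ID of the object itself)
Type:   Event Reference
Value:  AI Predicted Object ID    (from the AIPredictionEvent)
```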
This rule says to match the event's AI Predicted Object ID (the record on which the prediction was made) with an Opportunity ID in the system. Nice! Even better, because of this rule, the process will quietly do nothing when the prediction was made on the wrong type of object!
For example, if this process is for reassigning Opportunities to “me” when the PredictedAmount is over $1 million, and the prediction being evaluated is predicting that the IdealBirthdayGift for Product Manager Josh is a 42” TV (2160p, preferably), this rule is going to observe that the target (Product Manager Josh) is the wrong type (as in, not Opportunity). It will bail, without trying the process criteria nodes and without trying to fit Opportunity updates onto Product Manager records. Shazam.
Step 3: Use criteria to identify the predicted field
Prediction Builder is so powerful that it can predict multiple fields on a single object type. For example, you could predict both the Amount and the CloseDate on an Opportunity. For this reason, your rule should make sure that this particular AIPredictionEvent is for a prediction on the field you expect.
Click the + Add Criteria node to create a rule that will filter out predictions on other Opportunity fields. Under "Set Conditions", use "Platform Event" as your source instead of Object. For Field, select AI Predicted Field API Name. This tells the rule to evaluate which field this prediction was made for.
Now you will type in the Object and Field name in the “Value” column. For example, “Opportunity.PredictedAmount__c”. Any actions you add to the right of this criteria node will only fire if the prediction was made for the specified field.
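Taken as a whole, the criteria node reads something like this (a sketch of the UI settings, assuming a custom field named PredictedAmount__c):

```
Source:   Platform Event
Field:    AI Predicted Field API Name
Operator: Equals
Value:    Opportunity.PredictedAmount__c
```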
Step 4: Do some real logic
Now that your process has filtered out the noise, you can start to make some real rules. You simply add conditions after the one created in the prior step to make your logic happen. You have access to all of the values on the parent record, including that of the predicted field.
To make the rule that assigns opportunities that are predicted to be over $1 million to “me”, add a second condition that checks that value.
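Sketched out, that second condition might look like this (again assuming the PredictedAmount__c field from the running example):

```
Source:   Opportunity (the record)
Field:    PredictedAmount__c
Operator: Greater than
Value:    1000000
```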
Step 5: Let your creativity run free
The hard part is over! Let’s review what you’ve done.
- You’ve specified the object type, so the fields of that object are available to you in the proper places within Process Builder.
- You’ve checked the targetID on the event to make sure it’s one that matches the expected object type.
- You’ve added a filter on your rule to ensure that you’re looking at predictions on the right field.
- And now, you’re doing the important work of assigning all of the really good opportunities to your queue.
From here on out, it’s just Process Builder at its best. You can add actions, immediate or scheduled, that will fire off when the expected predictions arrive. You can create additional criteria to handle predictions on other fields, in this same process!
Want to use a formula? Let’s get creative! Sure, the Platform Event fields aren’t available to you in the formula editor. But does this stop us? Goodness, no. We can get crafty and build a first criteria node that returns false for the prediction we want (AI Predicted Field API Name Not Equal To Opportunity.PredictedAmount__c), letting the process fall through to a second criteria node, where a formula can work against the parent object fields.
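As a sketch, that second node's formula might look like the following (PredictedAmount__c and the $1 million threshold are just the running example from this post):

```
AND(
  [Opportunity].PredictedAmount__c > 1000000,
  [Opportunity].Amount < 1000000
)
```

In other words: fire only when Einstein predicts the deal will clear $1 million but the current Amount hasn't gotten there yet.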
You’re probably far better than I am with Process Builder. I’m just a product manager with a birthday on November 6. I will therefore stop here on how to complete your process, and leave it to your skill and creativity. You can take the logic as far as the tool allows. And, as always, when you reach that cliff, Apex will be there to trigger on these events and handle the rest of the use cases you can dream up.