I am training a recommendation model with LightFM on implicit purchase data.
In my dataset, each interaction represents a transaction: a user buys an item.
If a user buys the same item multiple times, I first aggregate the transactions by (user, item) and compute the purchase frequency.
My current approach for preparing interaction weights is:
For each (user, item) pair, compute the purchase frequency.
For each user, find the maximum purchase frequency across all items that user has bought.
Define the interaction weight as:
weight(user, item) = frequency(user, item) / max_frequency_of_any_item_for_that_user
So the weight is normalized per user and always falls in (0, 1].
For example, if a user has:
item A: 10 purchases
item B: 4 purchases
item C: 1 purchase
then the weights become:
item A: 10 / 10 = 1.0
item B: 4 / 10 = 0.4
item C: 1 / 10 = 0.1
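In case it helps to see it concretely, here is a minimal pandas sketch of the steps above (the column names and toy transaction log are just illustrative assumptions, not my real schema):

```python
import pandas as pd

# Toy transaction log: one row per purchase (column names are assumptions).
# This reproduces the example above: user u1 buys A 10 times, B 4 times, C once.
tx = pd.DataFrame({
    "user_id": ["u1"] * 15,
    "item_id": ["A"] * 10 + ["B"] * 4 + ["C"],
})

# Step 1: purchase frequency per (user, item) pair.
freq = tx.groupby(["user_id", "item_id"]).size().rename("frequency").reset_index()

# Step 2: each user's maximum frequency across their items.
freq["user_max"] = freq.groupby("user_id")["frequency"].transform("max")

# Step 3: per-user max normalization -> weights in (0, 1].
freq["weight"] = freq["frequency"] / freq["user_max"]

print(freq[["user_id", "item_id", "weight"]])
# item A -> 1.0, item B -> 0.4, item C -> 0.1
```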
My goal is top-K recommendation from purchase history.
My current LightFM settings are:
loss = warp
max_sampled = 20
no_components = 32
item_alpha = 0.00001
user_alpha = 0.0000001
What I would like advice on is:
Is this a reasonable way to prepare interaction weights for LightFM on purchase data?
Does normalizing by each user’s maximum item frequency remove too much useful signal?
Is there a better principle for preparing interaction weights for repeated purchases in implicit-feedback recommendation?
In practice, when using LightFM on purchase data, should interaction weights mainly reflect:
relative preference within a user,
or absolute purchase intensity?
I would appreciate advice from anyone who has worked with LightFM or implicit-feedback recommendation on transaction data.
