![](https://crypto4nerd.com/wp-content/uploads/2023/07/1yTfmyh2LvJwPL-OTVig8Rw.jpeg)
[Book] Buddhism Plain and Simple: Suspend quick judgments of good/bad, pleasant/hurtful; just be present and observe first: “It’s called enlightenment. It’s nothing more or less than seeing things as they are rather than as we wish or believe them to be.”
[Book] Mistakes Were Made (But Not by Me) (50% complete). Everyone self-justifies their actions. Reduce this effect on the self. Extend it to other people (especially your partner): “Successful partners extend to each other the same self-forgiving ways of thinking we extend to ourselves: They forgive each other’s missteps as being due to the situation but give each other credit for the thoughtful and loving things they do. If one partner does something thoughtless or is in a crabby mood, the other tends to write it off as a result of events that aren’t the partner’s fault.”
- I’ve only read this book halfway so far, but I find the idea of [extending the same self-justification habit you apply to yourself to others] more palatable than [applying the most generous interpretation to others’ actions]. The former sounds like “be fair”, whereas the latter sounds like “exert more effort than other people”.
Haskell. Skimming through Real World Haskell. One intuition: monads = wrappers on top of raw values that support (1) wrapping a raw value and (2) applying transformations to the raw value inside the wrapper, but not unwrapping the wrapper, because sometimes the wrapper is not unwrappable (e.g. IO).
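That intuition can be sketched in Python with a toy Maybe-style wrapper (the class and method names are made up for illustration; `fmap` and `bind` mirror Haskell’s `fmap` and `>>=`):

```python
class Maybe:
    """Toy Maybe-style wrapper: you can wrap a value and transform it
    in place, but there is no general 'unwrap' operation."""
    def __init__(self, value, present=True):
        self._value = value
        self._present = present

    @classmethod
    def just(cls, value):          # (1) wrap a raw value
        return cls(value)

    @classmethod
    def nothing(cls):
        return cls(None, present=False)

    def fmap(self, f):             # (2) transform the raw value inside
        if not self._present:
            return Maybe.nothing()
        return Maybe.just(f(self._value))

    def bind(self, f):             # f itself returns a Maybe (like >>=)
        if not self._present:
            return Maybe.nothing()
        return f(self._value)

    def __repr__(self):
        return f"Just({self._value!r})" if self._present else "Nothing"

# Chaining transformations never requires unwrapping the value:
result = Maybe.just(3).fmap(lambda x: x + 1).bind(lambda x: Maybe.just(x * 2))
print(result)  # Just(8)
```

Note how a `Nothing` short-circuits every later step without any caller ever reaching inside the wrapper.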
Python. Use snakeviz to visualize performance profiles. Dataclass is kinda like Java’s AutoValue. Use poetry instead of a bare venv.
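For the dataclass point, a minimal example (the `Point` class is made up): like AutoValue, `@dataclass` generates the constructor, `repr`, and value-based equality from the field declarations.

```python
from dataclasses import dataclass

@dataclass(frozen=True)  # frozen=True also makes instances hashable
class Point:
    x: float
    y: float

p = Point(1.0, 2.0)
print(p)                      # Point(x=1.0, y=2.0)
print(p == Point(1.0, 2.0))   # True: __eq__ generated automatically
```

For snakeviz, the typical flow is to record a profile with `python -m cProfile -o prog.prof prog.py` and then open it with `snakeviz prog.prof`.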
Simple rule-based model vs ML. https://blog.acolyer.org/2019/10/28/interpretable-models/. Get over the stigma that “simple, non-ML models” are “bad”. Sometimes it’s actually best for the practical situation to end up w/ simple if-else rules as your final production model. Non-interpretability comes at a huge cost in some settings. A quote: “If tasks are structured enough, and there are many models that perform well in the task, surely there is at least one model that is also interpretable in that space”.
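As a concrete illustration (the feature names and thresholds here are entirely made up), a final production model of this kind can literally be a handful of readable rules, where every prediction is explained by the rule that fired:

```python
def predict_churn(customer):
    """Hypothetical interpretable rule-based model for customer churn.
    Each branch is a rule a human can read, audit, and override."""
    if customer["months_inactive"] > 3:
        return "churn"   # rule 1: long inactivity
    if customer["support_tickets"] > 5 and customer["tenure_months"] < 6:
        return "churn"   # rule 2: unhappy new customer
    return "stay"        # default

print(predict_churn({"months_inactive": 4,
                     "support_tickets": 0,
                     "tenure_months": 24}))   # churn (rule 1 fired)
```

Compared to a black-box classifier of similar accuracy, debugging a wrong prediction here is a one-line diff to a rule.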
Differential privacy
- Tuning epsilon: take scale into account. Instead of tuning epsilon directly, tune the scale-adjusted epsilon = epsilon * n, where n is the dataset size. By tuning this ratio you automatically get effects like: if n is large, you can afford a smaller epsilon. If you do some more math, for linear queries this is actually equivalent to adjusting relative error * looseness of the column bound for SUM queries.
- User-level privacy: naive impl, improving true sensitivity. The naive implementation of multiplying each table’s base sensitivity by the user’s multiplicity K (the max number of rows one user contributes) introduces too much noise. Consider (1) non-record-linear queries, like COUNT(DISTINCT user): the user-level sensitivity is already 1, so you don’t have to multiply by K. (2) joins: don’t doubly multiply the user’s multiplicity. Once your implementation already matches the true sensitivity, the only way to reduce the noise scale is to reduce the true sensitivity itself by mangling the data, e.g. Google DP’s per-user contribution clamping + truncating the number of groups a user contributes to.
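A small sketch of why tuning epsilon * n works, using the Laplace mechanism for a SUM query (the bound, seed, and numbers are made up; assumes numpy): the noise scale is bound / epsilon = bound * n / (epsilon * n), while the true sum grows like mean * n, so holding epsilon * n fixed keeps the relative error roughly constant as n grows.

```python
import numpy as np

rng = np.random.default_rng(0)

def dp_sum(values, bound, epsilon):
    # Laplace mechanism for SUM: with records clamped to [0, bound],
    # the sensitivity is `bound`, so the noise scale is bound / epsilon.
    clamped = np.clip(values, 0, bound)
    return clamped.sum() + rng.laplace(scale=bound / epsilon)

scaled_eps = 100.0   # the quantity we tune: epsilon * n
bound = 100.0        # assumed (loose) upper bound on a single record
rel_errs = []
for n in (1_000, 100_000):
    values = rng.uniform(0, 50, size=n)   # true mean around 25
    epsilon = scaled_eps / n              # bigger n -> smaller epsilon
    noisy = dp_sum(values, bound, epsilon)
    rel_errs.append(abs(noisy - values.sum()) / values.sum())
    print(f"n={n:>6}  epsilon={epsilon:.5f}  relative error={rel_errs[-1]:.4f}")
```

Both runs land at the same order of relative error, roughly (bound / mean) / (epsilon * n), i.e. the looseness of the column bound times the inverse of the scale-adjusted epsilon.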
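And a sketch of the contribution-bounding idea itself (an illustration of the technique, not Google DP’s actual API; the function and parameter names are made up): cap each (user, group) contribution and truncate the number of groups per user, so a single user’s worst-case effect on any per-group SUM is at most max_groups * max_per_group.

```python
from collections import defaultdict
import random

def bound_contributions(rows, max_groups, max_per_group, seed=0):
    """Reduce true user-level sensitivity by mangling the data:
    keep at most `max_groups` groups per user (chosen at random),
    and clamp each (user, group) total to `max_per_group`.
    rows: iterable of (user, group, value)."""
    rnd = random.Random(seed)
    by_user = defaultdict(lambda: defaultdict(float))
    for user, group, value in rows:
        by_user[user][group] += value
    bounded = []
    for user, groups in by_user.items():
        kept = list(groups)
        if len(kept) > max_groups:
            kept = rnd.sample(kept, max_groups)      # truncate groups
        for g in kept:
            bounded.append((user, g, min(groups[g], max_per_group)))
    return bounded

rows = [("alice", "A", 5), ("alice", "A", 9), ("alice", "B", 2),
        ("alice", "C", 1), ("bob", "A", 3)]
out = bound_contributions(rows, max_groups=2, max_per_group=10)
# alice now appears in at most 2 groups, each clamped to <= 10,
# so her worst-case effect on the SUMs is at most 2 * 10 = 20.
```

After this preprocessing, the noise needed for user-level guarantees is calibrated to the enforced bound rather than to the (possibly unbounded) raw contributions.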