Hacker News | mjaniczek's comments

Look at count.co for a Figma-like approach to databases.

We've been using it at work (we're transitioning to Metabase now); it's great for exploring, debugging, and prototyping, but it ends up as too much of a tangled spaghetti mess for anything long-term. Would not recommend it for reports or dashboards facing users or other company departments.


That's super interesting!

With Kavla I want to lean into the exploring/debugging phase for analytics. "Embrace the mess", in a way.

My vision is that there will be an "export to dbt" button when you're ready to standardize a dashboard.

What made you pick Count? Was the spaghetti the major reason you left Count, or something else?


The choice to use Count was made before I joined the company; IIRC they migrated to it from Tableau.

We wanted to migrate (to Streamlit, back then) so the SQL wouldn't live locked inside a tool but in our git repository, and so we could run tests on the logic, etc. But the spaghetti mess was felt too, even if it wasn't the main reason to switch.

(But then 1) some team changes happened that pushed us towards Metabase, and 2) we found that Streamlit managed by Snowflake is quite costly, compute-time-wise: the compute server that starts when you open a Streamlit report stays live for tens of minutes, which was unexpected to us.)

----

Export to dbt sounds great. Count has "export to SQL", which walks the graph of the cell dependencies and collects them into CTEs. I can imagine there being a way to export into a ZIP of SQL+YML files, with one SQL file per cell.
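To make that concrete, here's a minimal sketch of what such an export could look like (hypothetical cell objects with a name and a sql attribute; nothing like Count's actual internals):

  import io, zipfile

  def export_cells(cells):
      # One .sql file per cell, plus a bare-bones dbt-style schema .yml next to it.
      buf = io.BytesIO()
      with zipfile.ZipFile(buf, "w") as zf:
          for cell in cells:
              zf.writestr(f"models/{cell.name}.sql", cell.sql)
              zf.writestr(f"models/{cell.name}.yml",
                          f"version: 2\nmodels:\n  - name: {cell.name}\n")
      return buf.getvalue()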


Thank you so much for sharing, super helpful!

Great take on the SQL lock-in; that's something I need to think hard about. Ideally a git integration, maybe?

Kavla also traverses the DAG; pseudocode:

  deps = getDeps() // recursive

  for dep in deps:
    if dep is query:
      run: "CREATE OR REPLACE VIEW {dep} AS {dep.text}"
    if dep is source:
      done
A selected chain of Kavla nodes could probably be turned into a single dbt model using CTEs!
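If I sketch that folding (again with a hypothetical node shape, each node having a name and a sql attribute; this is not Kavla's actual data model), it could be as simple as:

  def chain_to_dbt_model(nodes):
      # nodes: an ordered chain of Kavla-like nodes (assumed shape, see above).
      # Every node except the last becomes a CTE; the last one is the final SELECT.
      if len(nodes) == 1:
          return nodes[0].sql
      ctes = ",\n".join(f"{n.name} AS (\n{n.sql}\n)" for n in nodes[:-1])
      return f"WITH {ctes}\n{nodes[-1].sql}"

Assuming each node's SQL already refers to its upstream nodes by name (like the views in the pseudocode above), those references would resolve to the CTEs just the same.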

Thanks for making me think about this!


I'm optimizing the performance of property-based testing (PBT) generation and shrinking in [elm-test](https://github.com/elm-explorations/test/compare/master...ja...): on its own PBT-heavy test suite I got it down from 1336ms to 891ms by using JS TypedArrays.

I'm also experimenting with coverage-guided PBT input generation in the same library, AFL-style -- right now elm-test only has random input generation.
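For context, the AFL-style idea boils down to roughly this loop (a conceptual Python sketch only; not what elm-test does or will do):

  import random

  def coverage_guided(run_with_coverage, mutate, seeds, budget=1000):
      # seeds: a non-empty list of initial inputs.
      # run_with_coverage is assumed to return the set of branches an input hit.
      # Keep any input that reaches coverage we haven't seen yet, and mutate
      # the interesting ones instead of generating purely at random.
      corpus = list(seeds)
      seen = set()
      for _ in range(budget):
          candidate = mutate(random.choice(corpus))
          covered = run_with_coverage(candidate)
          if not covered <= seen:
              seen |= covered
              corpus.append(candidate)
      return corpus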


It seems like the ASCII/Unicode mode doesn't work all that well: https://agents.craft.do/mermaid#sample-6

It's entirely happy paths right now; it would be best to allow the test runner to also test for failures (check expected stderr and return code), then we could write those missing tests.
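A tiny helper like this (hypothetical; the file names and messages are made up, it's not what the repo has today) would be enough to express those failure cases:

  import subprocess

  def run_case(cmd, expected_out="", expected_err="", expected_code=0):
      # Run one example and compare stdout, stderr and the exit code.
      result = subprocess.run(cmd, capture_output=True, text=True)
      assert result.stdout == expected_out, f"stdout mismatch:\n{result.stdout}"
      assert result.stderr == expected_err, f"stderr mismatch:\n{result.stderr}"
      assert result.returncode == expected_code, f"unexpected exit code {result.returncode}"

  # e.g. run_case(["python", "fawk.py", "tests/bad_syntax.fawk"],
  #               expected_err="parse error ...\n", expected_code=1)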

I think you can find a test somewhere in there with a comment saying "FAWK can't do this yet, but yadda yadda yadda".


It's funny because I'm evaluating LLMs for just this specific case (covering tests) right now, and it does that a lot.

I say "we need 100% coverage on that critical file". It runs for a while, tries to cover it, fails, then stops and say "Success! We covered 60% of the file (the rest is too hard). I added a comment.". 60% was the previous coverage before the LLM ran.


I have only had some previous experience with Project Euler, which I liked for the loop of "try to brute-force it -> doesn't work -> analyze the problem, exploit patterns, take shortcuts". (I hit a skill ceiling after solving 166 problems.)

Advent of Code has this mass hysteria feel about it (in a good sense), probably fueled by the scarcity principle / looking forward to it as December comes closer. In my programming circles, a bunch of people share frustration and joy over the problems, compete in private leaderboards; there are people streaming these problems, YouTubers speedrunning them or solving them in crazy languages like Excel or Factorio... it's a community thing, I think.

If I wanted to start doing something like LeetCode, it feels like I'd be alone in there, though that's likely false and there probably are Discords and forums dedicated to it. But somehow it doesn't have the same appeal as AoC.


Yes, I'll only have an answer to this later, as I use it, and there's a real chance my changes to the language won't mix well with the original AWK. (Or is your comment more about AWK sucking for programs larger than 30 LOC? I think that's a given already.)

Thankfully, if that's the case, then I've only lost a few hours """implementing""" the language, rather than days/weeks/more.


In my case, I can't share them anymore because "the conversation expired". I am not completely sure what the Cursor Agent rules for conversations expiring are. The PR getting closed? Branch deleted?

In any case, the first prompt was something like (from memory):

> I am imagining a language FAWK - Functional AWK - which would stay as close to the AWK syntax and feel as possible, but add several new features to aid with functional programming. Backwards compatibility is a non-goal.
>
> The features:
>
> * first-class array literals, being able to return arrays from functions
> * first-class functions and lambdas, being able to pass them as arguments and return them from functions
> * lexical scope instead of dynamic scope (no spooky action at a distance, call-by-value, mutations of an argument array aren't visible in the caller scope)
> * explicit global keyword (only in BEGIN) that makes variables visible and mutable in any scope without having to pass them around
>
> Please start by succinctly summarizing this in the README.md file, alongside code examples.

The second prompt (for the actual implementation) was something like this, I believe:

> Please implement an interpreter for the language described in the README.md file in Python, to the point that the code examples all work (make a test runner that tests them against expected output).

I then spent a few iterations asking it to split the single file containing all the code into multiple files (one per stage, so e.g. lexer, parser, ...) before merging the PR, and then did more stuff manually (moving tests to their own folder etc.)



It stands to reason that if it was fairly quick (from your telling) and you can vaguely remember, then you should be able to reproduce a transcript with a working interpreter a second time.

To be clear: I'm not challenging your story, I want to learn from it.


Thank you! Great reply, much appreciated.


Yes :)


What is it with HN and the "oh, I thought {NAME} is the totally different tool {NAME}" comments? Is it some inside joke?


Or just incredulity that people naming a technology are ignorant of the fact that another well-known technology already uses that name.

¯\_(ツ)_/¯


Hey all, I've just added a paragraph about this. Thanks for the feedback.

