Builders who share projects made with AI assistance face public backlash with no clear standard for what disclosure is actually expected

Ship something built with AI tools and watch the comments. A significant and vocal portion of the internet treats AI assistance as dishonesty, cheating, or a devaluation of the work, regardless of whether the human contribution was substantial. The norms for what counts as acceptable AI use do not exist yet, and builders are paying the social cost of that ambiguity.

Added April 28, 2026
• 77% of developers now use AI coding assistants as part of their workflow
• 63% of creators who use AI tools say they avoid publicly disclosing AI use due to anticipated negative reaction
• 3x higher negative engagement rate on posts disclosing AI use compared to equivalent posts without disclosure
The Problem

The double bind that has no clean exit

You built something. You used AI tools to help you build it. You want to share it with a community that might find it useful or interesting. You now face a choice with no good option.

If you disclose the AI assistance, a vocal portion of the audience will question whether the work is really yours, whether it demonstrates real skill, or whether you are contributing something of value or just remixing AI output. The criticism will often be disconnected from whether the thing you built is actually useful.

If you do not disclose and the AI assistance is identified anyway, the criticism is worse because it now includes accusations of deception. The same AI use that would have drawn criticism as a disclosed choice becomes evidence of dishonesty once it surfaces as an omission.

The cultural norms are still being formed: what AI assistance means, whether it reduces the value of work, what disclosure is expected, and what standards a community applies. They are being worked out in real time, and the people sharing work right now are paying the social cost of that ambiguity before any consensus exists.

Why the reaction is so strong

The hostility to AI-assisted work is not entirely irrational. It reflects real concerns about several things at once: the devaluation of skills that took years to develop and can now be partly replicated in minutes; the uncertainty about what expertise means when tools can approximate it; the economic consequences for professionals whose income depends on skills whose scarcity AI is eroding; and a genuine philosophical disagreement about what constitutes authentic creative or technical contribution.

These are legitimate concerns. The problem is that they are being applied as a blunt instrument to anyone who uses AI tools for any purpose, regardless of whether their use is substantial or minimal, regardless of whether their contribution is significant or token. The absence of nuanced community standards means that a developer who used GitHub Copilot to autocomplete variable names faces the same criticism as someone who generated an entire codebase without understanding any of it.

What the data shows about actual AI adoption

The Stack Overflow Developer Survey found that 77 percent of developers now use AI coding assistants as part of their workflow. That is not a fringe behaviour. It is the majority of working developers. The Adobe Future of Creativity Study found that 63 percent of creators who use AI tools avoid disclosing it publicly due to anticipated negative reaction.

The gap between the actual adoption rate and the disclosed adoption rate tells you what the cultural moment looks like from the inside. The majority of builders are using AI tools. The majority of those builders are not disclosing it publicly. The people who do disclose often face criticism. The people who do not disclose and are found out face worse criticism. The system currently selects for non-disclosure and then punishes it, which is a reliable way to produce exactly the environment of distrust it claims to be opposing.
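
To get a rough sense of the size of that gap, you can combine the two figures above. The caveat matters: they come from different surveys of different populations (Stack Overflow's developers and Adobe's creators), so the arithmetic below is purely illustrative, not a measurement.

    # Illustrative arithmetic only: the 77% and 63% figures come from
    # different surveys (Stack Overflow developers vs Adobe creators), so
    # multiplying them mixes populations. Treat the output as a sketch.
    adoption = 0.77        # share of developers using AI coding assistants
    non_disclosure = 0.63  # share of AI users who avoid disclosing it publicly

    silent = adoption * non_disclosure            # use AI, stay quiet
    disclosing = adoption * (1 - non_disclosure)  # use AI, may disclose
    non_users = 1 - adoption                      # do not use AI at all

    print(f"Silent AI users:      ~{silent:.0%}")       # ~49%
    print(f"Disclosing AI users:  ~{disclosing:.0%}")   # ~28%
    print(f"Non-users:            ~{non_users:.0%}")    # ~23%

Even read loosely, the quiet group comes out as the largest group of builders, which is exactly the dynamic described above.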

Proof Signals
🗣️ Twitter and X: Multiple high-profile incidents in 2024 and 2025 where builders disclosed AI assistance and faced significant backlash. The pattern is consistent: disclosure triggers criticism about authenticity, skill, and value. Non-disclosure, when discovered later, triggers even harsher criticism about deception. There is currently no safe path that avoids criticism.
🗣️ Product Hunt: Launches featuring AI-built products regularly generate comment threads debating the legitimacy of AI assistance. The debate is not about product quality or usefulness. It is about the morality and authenticity of using AI tools, which is a separate question that often drowns out evaluation of the actual product.
🗣️ Hacker News: Show HN posts featuring projects built with AI assistance receive qualitatively different comment threads than equivalent posts without AI disclosure. The discussion often shifts from the technical merit of the project to a debate about AI, which the builder did not ask for and cannot meaningfully control.
🗣️ Reddit developer communities: r/ProgrammerHumor, r/cscareerquestions, and r/webdev all contain threads debating the legitimacy of AI-assisted development. The sentiment ranges from complete acceptance to outright rejection, confirming that no community standard has emerged.
🗣️ LinkedIn: Builders who share AI-assisted project announcements on LinkedIn face a split reaction. Professional context creates more acceptance than consumer contexts, but significant criticism still appears and can affect professional reputation when posts reach large audiences.
Who Has This Problem

The Indie Builder

Built a useful tool with AI coding assistance and wants to share it with the community that might benefit from it. Knows that disclosing AI use will attract criticism. Knows that not disclosing and having it discovered will attract worse criticism. Is navigating an impossible double bind with their reputation at stake.

The Designer

Used AI image generation or AI writing tools as part of a creative project. The cultural norms around AI in creative communities are more hostile than in developer communities. Faces the same disclosure dilemma with higher emotional stakes, because creative identity is often more personally tied to process than engineering identity is.

The Agency or Freelancer

Uses AI tools to work faster and deliver more value to clients. The question of whether and how to disclose AI use in client work involves both ethical obligations and competitive considerations. Clients who perceive AI use as reducing the value of the work may reduce compensation expectations accordingly.

The Student or Career Changer

Used AI assistance to build a portfolio project intended to demonstrate competence to potential employers. The project is real and functional, but hiring norms have not settled whether AI assistance undermines what the portfolio claims to demonstrate.

Why Nothing Works

Voluntary disclosure

The current approach for most builders is personal judgment about whether and how to disclose. This produces inconsistent disclosure, which creates the conditions for accusations of deception when undisclosed AI use is later identified. The absence of a standard means everyone makes a different call, and critics can always find a basis for criticism.

Platform disclosure features

Some platforms have added voluntary AI disclosure labels. These are inconsistently used, weakly enforced, and carry no meaningful consequences for non-disclosure, so the label has little signal value: its absence does not mean AI was not used.

Community norms

Different communities have developed different norms around AI disclosure, and these norms are not compatible with each other. A builder navigating multiple communities faces conflicting expectations simultaneously, with no guidance on which standard to apply.

Defensive framing

Builders who anticipate criticism sometimes pre-emptively address AI use in their posts with explanations of how they used it and what their contribution was. This framing reduces but does not eliminate criticism and requires significant additional communication effort for every post.

Not disclosing at all

The most common choice and the one that carries the highest risk. When AI use is identified by critics who were not told about it upfront, the criticism is more severe because it involves accusations of deception rather than just disagreement about whether AI assistance is legitimate.

Go Research This Yourself
  • 🔍 Twitter and X search: "AI built app backlash disclosed vibe coding criticism 2025 2026"

    Filter to recent posts. Look for specific incidents where builders disclosed AI use and faced organised criticism. The specific incidents are more useful than aggregate sentiment data.

  • 🔍 Product Hunt search: "AI built tool comment section disclosure"

    Browse recent launches that mention AI in their description and read the comment sections. The variation in reactions to similar disclosures shows how unsettled the norms are.

  • 🔍 Hacker News search: "AI assisted coding vibe coding legitimacy"

    Search Hacker News for AI coding and disclosure discussions. The comments are detailed and technical, and they represent a cross-section of experienced developer opinion. A small script for pulling these threads programmatically follows this list.

  • 🔍 Stack Overflow Developer Survey search: "AI tools usage disclosure developer survey"

    The annual survey includes detailed data on AI tool adoption rates and developer sentiment about AI assistance. Free to access and regularly cited.

  • 🔍 Reddit search: "AI portfolio project legitimate hiring employer views"

    r/cscareerquestions, r/webdev, r/ProgrammerHumor. Look for threads debating whether AI-assisted projects count for hiring purposes. Employer perspectives in these threads are particularly valuable.
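
For the Hacker News item above, the public Algolia search API for Hacker News (hn.algolia.com/api/v1) makes the same search repeatable from a script. This is a minimal sketch assuming Python 3 and only the standard library; the query strings are starting points, not canonical terms.

    # Pull recent Hacker News stories mentioning AI assistance via the public
    # Algolia HN search API. Change tags to "comment" to search comments instead.
    import json
    import urllib.parse
    import urllib.request

    def search_hn(query, tags="story", hits=20):
        """Return the most recent matching items from hn.algolia.com."""
        params = urllib.parse.urlencode({
            "query": query,
            "tags": tags,          # e.g. "story", "show_hn", "comment"
            "hitsPerPage": hits,
        })
        url = f"https://hn.algolia.com/api/v1/search_by_date?{params}"
        with urllib.request.urlopen(url) as resp:
            return json.load(resp)["hits"]

    for query in ("AI assisted", "built with AI", "vibe coding"):
        print(f"\n== {query} ==")
        for hit in search_hn(query):
            print(f'{(hit.get("points") or 0):>5} pts  '
                  f'{(hit.get("num_comments") or 0):>4} comments  '
                  f'{hit.get("title")}')

Sorting the results by comment count rather than points is a quick way to find the threads where the disclosure debate actually played out.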

Questions Worth Asking
  • 1. Is there a standard framework for AI disclosure, analogous to Creative Commons licensing, that could become widely adopted and actually create clarity?
  • 2. Does the backlash reflect a genuine concern about skill and authenticity that will persist as AI tools become universal, or is it a transitional moment that will resolve as norms stabilise?
  • 3. Could a certification or verification system that validates the human contribution to an AI-assisted project create a trusted signal for hiring and community evaluation?
  • 4. Is the opportunity in the community and norm-setting space rather than in a traditional product, and if so, what does a business model look like for a norm-setting platform?
  • 5. How did analogous historical transitions (photography versus painting, digital music production versus live performance) resolve the authenticity question over time, and what can builders learn from that trajectory?
โš ๏ธ gotaprob surfaces problems worth investigating โ€” not businesses ready to build. We don't validate ideas or guarantee opportunity. This is a starting point. Do your own research.
