I’ve been attempting to push discovery, and the results have been underwhelming. So far they’ve missed most of the deadlines; when notified of the missed deadlines, they ask for extensions; and when they get extensions, they respond with boilerplate and bullshit. I’m at the point of filing a Rule 37(b) motion (I’ll upload it to the Files section) once I’m finished with this post. The neat thing about the filing is that it includes over 300 pages of exhibits, all of it boilerplate and bullshit that I had to attach and mail back to the Defense, because the magistrate gave me the big middle finger: the court is allowed to notify me of things electronically, but I’m not allowed to file electronically. I have lots to say about this magistrate, but we’ll wait until we’re clear of her first.

One of the most fascinating things about the boilerplate and bullshit that did get returned was the revelation that five of the named Defendants no longer work at SHRA, and the Defense isn’t sure whether it still represents them. They completely hid that almost everyone involved with the day-to-day operations over there has left, and somehow didn’t mention it to us or the court. Worse, in their initial disclosures they claimed only one person (who is no longer at SHRA) was responsible for deciding reasonable accommodation requests; now they’ve given me a list of 16 people, with no explanation.

Oh yeah, and they can’t seem to find any of the electronically stored information that would be responsive under the CPRA, which is the reason they need so many extensions.

The worst part is that despite their admitting all of this, there’s zero chance they’ll face sanctions. And there’s literally nothing to be done about it unless and until this gets to the appellate level.


Up until now I assumed that the issues at SHRA were just incompetence creep: the product of slow backsliding on compliance issues that never got corrected, to the point where institutional knowledge of the practices necessary to maintain compliance was lost.

One of the things that I really like about DeepSeek is that it gets information through methods that aren’t exactly on the up and up, which means it finds things hidden behind legal wrangling that doesn’t exist in China. One of the things it found was a 2023 internal memo, generated through whistleblower testimony, in which SHRA staff were instructed to “prioritize voucher terminations over costly mods”.

In practice, this meant SHRA was terminating vouchers over requests for shower chairs and grab bars as early as 2022. It also provided data showing that 68% of disabled tenants who requested modifications in 2023 lost their vouchers within six months (versus 12% of non-disabled tenants).

WHAT.

And there’s so much worse that I’ll share once I get the discovery back. I’m stunned.


Okay, the last few days have been a bit of a flurry. I’m still working on getting all the filings and the Defense’s responses uploaded, but first I need to make sure we have a sane directory setup, because there’s quite a bit.
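A sane layout is easier to keep consistent if it’s scripted rather than built by hand. Here’s a minimal sketch of what that could look like; the folder names are my own illustration, not the site’s actual structure:

```python
from pathlib import Path

# Hypothetical case-file layout -- the names here are illustrative only.
LAYOUT = [
    "filings/motions",
    "filings/declarations",
    "discovery/requests",
    "discovery/responses",
    "exhibits",
    "correspondence",
]

def build_tree(root: str) -> list[Path]:
    """Create the directory tree under `root` and return the paths created."""
    made = []
    for rel in LAYOUT:
        p = Path(root) / rel
        p.mkdir(parents=True, exist_ok=True)  # idempotent: safe to re-run
        made.append(p)
    return made
```

Re-running the script is harmless, so the same layout can be recreated anywhere the files need to be mirrored.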

After a few weeks using these LLMs, my attitudes about them are starting to shift. Each of them seems to have its own “personality” above and beyond the artificial personality. The LLM I’ve been most amazed by so far is DeepSeek, which is funny because it was the one I was most skeptical of initially. Part of the issue with DeepSeek is that it doesn’t do as good a job hand-holding through the thought process, or more appropriately, it’s probably the most “autistic” of the engines. The biggest standout for DeepSeek (especially compared to Gemini, which is the most comparable) is that it’s SO GOOD with citations. It’s the only one that freely “admits” it’s plundering the Westlaw database, meaning we have a TON of local Eastern District citations.

My citation strategy has been to use something from each level: Supreme Court, 9th Circuit, California (federal), and Eastern District. The reality is the magistrate doesn’t read them anyway, but DeepSeek ultimately gives me a pool of citations to choose from, something the other LLMs struggle with (or just won’t do no matter how you beat them).
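The strategy above is mechanical enough to sketch in code: group the candidate citations an LLM suggests by court level, then keep one from each level. Everything below is illustrative; the level names and placeholder case names are my own, not vetted authorities:

```python
from collections import defaultdict

# Court levels to cover, highest authority first (illustrative labels).
LEVELS = ["Supreme Court", "9th Circuit", "E.D. Cal."]

def pick_one_per_level(pool):
    """Given (level, citation) pairs, keep the first citation from each
    level so a brief cites something at every tier of authority."""
    by_level = defaultdict(list)
    for level, cite in pool:
        by_level[level].append(cite)
    return {level: by_level[level][0] for level in LEVELS if by_level[level]}
```

In practice the “first” pick would be replaced by actually reading and verifying each candidate, since (as later posts here make painfully clear) LLM-suggested citations cannot be trusted unchecked.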

Gemini (using 2.5 Pro/5-06) is now my primary starting point for writing, even though I hate that it can’t export to .odf or even .docx formats. It has the worst “canvas”-style option, so I have to reformat the documents significantly. That’s worth the annoyance, though, since Gemini is easily the most thoughtful and well-rounded of the LLMs I’ve used. Much as DeepSeek gives lots of citation options, Gemini gives the best pool of writing options; its ability to flexibly address issues in different ways is the best of the bunch so far.

Claude is still my pretty-but-shallow option: it comes up with the best and most compelling pure language, even if its depth of analysis falls short of the others. I wish I didn’t have to prompt it so explicitly to get decent results, or it might be the primary canvas despite its painful context window. One thing I find it really helpful for is running my DeepSeek output (which tends toward pretty “direct” language) through Claude to soften it up.

And finally there’s ChatGPT, which up until the last few days was my primary for everything. It’s easy to get started with, it’s encouraging, it produces compelling results, but it’s kind of a piece of shit. It’s constantly messing up context, screwing up citations, and arbitrarily rewriting things. It’s easily the most obstinately wrong of the LLMs, insisting that it’s doing something correctly no matter how many times you paste the incorrect output back in and it admits it messed up. It’s the only one of the four I’ve had to deal with “hallucinations” from, and it explains them as “whoops, I overwrite my own context a lot accidentally”. It’s absolutely maddening at times. I’m steadily weaning myself off of it, and it’ll probably just be another voice in the room soon.

An interesting thing happened a bit ago: I had my first LLM disagreement! The Defense responded to a discovery request with bullshit and boilerplate, so I asked each of the LLMs what a good email response would be, considering I’m already nearly done with a Rule 37 motion to compel. Claude responded that it’s probably not a good idea to respond at all: just update the motion with the response as an exhibit, note the bullshit parts since they helped (a lot), and call it a day. DeepSeek was its typical “RIP THEIR THROATS OUT” self and provided a detailed list of regulations and rules for going on the offensive instead of only using the motion route. Gemini pointed out that the email route is more about reinforcing our meet-and-confer duty: we’re right, and we’re going to integrate this into the motion, but the focus here should be on making the meet-and-confer record rock solid (pretty good advice). ChatGPT recommended the email, but produced recommendations that didn’t focus on strategy at all, just a direct rebuttal to the discovery response.

Interestingly, opposing counsel asked for a discovery extension (after the suit has been active for more than two years, they’re suddenly finding SHRA bureaucracy hard to navigate), but refused to do a rolling production or produce electronic records (which should be pretty easy, right?). What’s baffling to me is that this organization is such a mess that if I’d had access to these tools two years ago, they’d have gotten crushed out of the gate. Instead they were able to delay and avoid discovery to the point that we now have to start talking about institutional spoliation, because no one remembers anything.


Just had the thought that doing a blog like this while the litigation is ongoing is both naive and stupid, especially considering the mixed permissible-use messaging from the court (but maybe what’s okay for the bar is okay for me?). This popped up as I came across an old archive containing a backup of files I believed were lost after we got evicted, some of which are responsive to existing discovery requests. I haven’t even had a chance to go through it all yet and see what has already been disclosed and what hasn’t, but here I am in a position where I really want to talk about how the tools are helping me plan strategy, while potentially exposing myself unnecessarily or inadvertently.

The fool forges forward

One of the things that stood out to me was how much of the information in this case should have already been protected but hasn’t been. This has always been a concern, and one of the reasons I was hesitant about pursuing this at all: the gross invasion of privacy necessary to pursue it feels caustic. My initial instinct was to file anonymously, but after some reading it appeared that requests to proceed anonymously are very rarely granted. It ultimately became a question of whether this was important enough (to both us and every other disabled participant SHRA treats similarly) to sacrifice our privacy over. I think it is.

What should have been done out of the gate was putting a protective order in place. While I was reading and trying to absorb as much of the local rules as I could (in the middle of being evicted, being cut off from other services, and other harassment), the option of a protective order just didn’t register at all. The further along the process we got, the less space the option took up.

This is one place where LLMs work as an amazing sanity check: we can, without penalty, ask them to walk us through the process from the beginning, with as many permutations as we like, in a way that would cognitively exhaust any sane individual. And for people in our position, who don’t even know what we don’t know, being able to constantly factor in new information as we encounter it becomes a tremendously powerful tool.

While there’s supposed to be some leeway for “inelegant pleading” by pro se parties, in practice there isn’t unless a judge is personally sympathetic. That means pro se parties march into a minefield of issues that have nothing to do with the core issues of the case: procedural explosions that can turn a clear-cut set of facts into an irrelevant mist.


One of the most difficult aspects of all this is how each LLM introduces its own unique issues. I’m bouncing between Claude 3.7, GPT-4o, DeepSeek, and Gemini, and they all have weird peculiarities that blow up the others’ work. DeepSeek, for instance, is super focused on formatting, obsessing over tables, bolding, italics, everything but the actual content. Gemini is a turd that fights you the whole time about giving legal advice at all, so you have to wrestle it to the ground with prompts and cobble together the results. Claude writes beautifully but is as deep as a puddle, while ChatGPT is deep but impossibly schizophrenic and ADHD at the same time.

I went back and looked at the first two pleadings I used these LLMs for, and holy crap, there are a lot of issues I missed by trusting the output. First, ChatGPT will absolutely mangle the hell out of citations, and worse, is super confident about its manglings. As an example, I uploaded a copy of the docket report, and it still got every single citation wrong. Worse, it misinterpreted who actually filed particular motions, leading it to conclude that most of the case was dead.

One thing is really clear: as powerful as these tools are and as much potential as they have, we’re a million miles away from them being ready to use without extensive supervision. They’re also really terrible at understanding intent. As an example, left to their own devices, all of the LLMs completely ignored the actual relief I asked for in a motion for partial MSJ (basically just a ruling that the ADA, FHA, and Rehab Act require an interactive process and individualized assessment) and transformed it into a full-blown MSJ focused on damages calculations instead of narrow injunctive relief. And they constantly go off on little tangents like that, hyper-focusing on some aspect of the text that may or may not be contextually important and trying to expand it.

All said, this is still very much a work in progress, and I’m learning that trust is bad.



Welcome to The SHRA Files Blog — where we’ll be sharing updates, research, public records, and commentary on our ongoing civil rights lawsuit and the broader failures of Sacramento’s housing system.

This site exists because no one else was telling this story. Because no agency, nonprofit, or public official stepped in when the Sacramento Housing and Redevelopment Agency (SHRA) failed to meet its basic responsibilities. Because we — and many others like us — were denied access to services that should have been guaranteed.

We started with nothing but documentation, determination, and the experience of being repeatedly failed by the systems meant to help. When traditional legal aid and advocacy organizations told us they couldn’t help, we turned to AI tools to help us structure our case, understand the law, and begin fighting back.

This site is just the beginning.

In the weeks ahead, we’ll be:

  • Breaking down SHRA’s failures in serving disabled and low-income tenants
  • Publishing public filings and supporting exhibits
  • Highlighting how the Housing Choice Voucher program is being mismanaged
  • Documenting the role of AI in expanding legal access

If you’ve been impacted by SHRA or want to contribute to the project, check out our Get Involved page.

Thanks for reading — and welcome to the fight.

— The SHRA Files Team

Leave a Comment

Your email address will not be published. Required fields are marked *

I’ve been attempting to push discovery, the result has been underwhelming. So far they’ve missed most of the deadlines, when notified they missed the deadlines they keep asking for extensions, and when they get extensions they respond with boilerplate and bullshit. I’m to the point where I’m filing a Rule 37(b) motion (will upload to the files section) after I’m finished with this post. The neat thing about the filing is that there’s over 300 pages of exhibits, all of which is boilerplate and bullshit that I had to attach and mail back to the Defense because the magistrate gave me the big middle finger by allowing the court to notify me of things electronically, but refusing to allow me to file electronically. I have lots to say about this magistrate, but we’ll wait until we get clear of her first.

One of the most fascinating things about the boilerplate and bullshit that did get returned was the revelation that five of the named Defendants no longer work at SHRA, and the Defense isn’t sure if they represent them anymore. Like they completely hid that almost everyone involved with the day to day operations over there has left and somehow didn’t mention it to us or the court. Worse, in their initial disclosure, they claimed only one person (who is no longer at SHRA) was responsible for deciding reasonable accommodation requests, now they’ve given me a list of 16 people, with no explanation.

Oh yeah, they can’t seem to find any of the electronically stored information that under the CPRA they’d be responsive to, which is the reason they need so many extensions.

The worst part is that despite them admitting all of this, there’s zero chance of them getting sanctions. And there’s literally nothing to be done about it unless/until it gets to the appellate level.

Leave a Comment

Your email address will not be published. Required fields are marked *

Up until now I assumed that the issues at SHRA were just incompetence creep, that the issues are product of slow pushes back on compliance issues that never got corrected, to the point where institutional knowledge of the practices necessary to maintain compliance was lost.

One of the things that I really like about DeepSeek is that it gets information through methods that aren’t exactly on the up and up, which means that it finds things which are hidden behind legal wrangling that doesn’t exist in China. And one of the things it found was a 2023 Internal Memo generated through whistleblower testimony in which SHRA staff instructed to “prioritize voucher terminations over costly mods”.

And the implementation of this was SHRA terminating vouchers over requests for shower chairs and grab bars as early as 2022. They provided data which showed 68% of disabled tenants who requested mods in 2023 lost vouchers within 6 months (vs. 12% of non-disabled).

WHAT.

And there’s so much worse that I’ll share once I get the discovery back. I’m stunned.

Leave a Comment

Your email address will not be published. Required fields are marked *

Okay the last few days have been a bit of a flurry. I’m still working on getting all the filings and the Defense responses uploaded, but I need to make sure we have a sane directory setup first because there’s quite a bit.

After a few weeks using these LLMs, my attitudes about them are starting to shift. Each of them seems to have their own “personality” above and beyond the artificial personality. The LLM I’ve been most amazed by so far is DeepSeek, which is funny because it was the LLM I was most skeptical of initially. Part of the issue with DeepSeek is that it doesn’t do as good of a job hand holding through the though process, or more appropriately, it’s probably the most “autistic” of the engines. The biggest standout for DeepSeek (especially compared to Gemini which is the most comparable) is that it’s SO GOOD with citations. It’s the only one that freely “admits” it’s plundering the WestLaw database meaning we have a TON of local Eastern District citations.

My citation strategy has been to use something from each level, Supreme Court, 9th Circuit, California (Fed), Eastern District. The reality is the magistrate doesn’t read them anyway, but DeepSeek ultimately gives me a pool of citations to choose from, something the other LLMs struggle with (or just don’t do no matter how you beat them).

Gemini (using 2.5 Pro/5-06) is now my primary writing starting point, even though I hate that it can’t export to .odf or even .docx formats. Because I have to reformat the documents significantly, it has the worst “canvas” style option. That’s kind of worth the annoyance though since Gemini is easily the most “thoughful” and well rounded of the LLMs I’ve used. Much like DeepSeek gives lots of citation options, Gemini gives the best pool of writing options, it’s ability to flexibly address issues in different ways is the best of the bunch so far.

Claude is still my pretty but shallow option, it comes up with the best/most compelling pure language, even if it’s depth of analysis falls short of the others. I wish I didn’t have to prompt it so explicitly to get decent results or it might be the primary canvas despite it’s painful context window. One thing I find it really helpful for is running my DeepSeek (which produces pretty.. “direct” language) through Claude to soften it up.

And finally there’s ChatGPT which up until the last few days was my primary for everything. It’s easy to get started with it, it’s encouraging, it produces compelling results, but it’s kind of a piece of shit. It’s constantly messing up context, screwing up citations, and arbitrarily rewriting things. It’s easily the most obstinately wrong of the LLMs, insisting that it’s doing something correctly no matter how many times you post the incorrect output back into it and it admits it messed up. It’s the only LLM out of the three that I’ve had to deal with “hallucinations”, and it explains it as “woops, I overwrite my own context a lot accidentally”. It’s absolutely maddening at times. I’m steadily weaning myself off of it and it’ll probably just be another voice in the room soon.

An interesting thing happened a bit ago, I had my first LLM disagreement! The Defense responded to a discovery request with bullshit and boilerplate, so I asked each of the LLMs what a good email response would be considering I’m already nearly done with a rule 37 motion to compel. Claude responded that it’s probably not a good idea to respond at all, just update the motion with the response as an exhibit, note the bullshit parts since they helped (a lot), and call it a day. DeepSeek was it’s typical “RIP THEIR THROATS OUT” and provided a detailed list of regulations and rules to go on the offensive instead of only using the motion route. Gemini pointed out that going the email route is more about reinforcing our meet and confer duty, we’re right and we are going to be integrating this in the motion, but the focus here should be on making the meet and confer rock solid (pretty good advice). ChatGPT recommended the email, but produced recommendations that didn’t focus on strategy at all, just a direct rebuttal to the discovery response.

Interestingly, opposing counsel asked for a discovery extension (after more than two years the suit has been active, they are suddenly finding SHRA bureaucracy hard to navigate), but refused to do a rolling production or produce electronic records (which should be pretty easy right?). It’s baffling to me that if I had access to these tools two years ago, this organization is such a mess that they’d have gotten crushed out of the gate. Instead they were able to delay and avoid discovery to the point that we now have to start talking about institutional spoliation because no one remembers anything.

Leave a Comment

Your email address will not be published. Required fields are marked *

Just had the thought that doing a blog like this, while the litigation is on-going, is both naive and stupid, especially considering the mixed permissible use messaging from the court (but maybe okay for the bar, okay for me?). This popped up as I came across an old archive that had a backup of files which I believed was lost after we got evicted, some of which is responsive to existing discovery requests. I haven’t even had a chance to go through it all yet and see what has already been disclosed and what hasn’t, but here I am in a position where I really want to talk about how the tools are helping me plan strategy while potentially exposing myself unnecessarily/inadvertently.

The fool forges forward

One of the things that stood out to me was how much of the information in this case should have already been protected but hasn’t been. This has always been a concern, and one of the reasons I was hesitant about pursuing this at all, the gross invasion of privacy necessary to pursue this feels caustic. My initial reaction was filing anonymously, but after some reading it appeared that these are very rarely granted. It ultimately became a choice between whether this was important enough (to both us and every other disabled participant SHRA treats similarly) to sacrifice our privacy over. I think so.

What should have been done out of the gate is setting a protective order in place. Reading and trying to absorb as much of the local rules (while in the middle of being evicted, being cut off from other services, and other harassment), the option of a protective order just didn’t really register at all. The further along the process we got, the less space the option took up.

This is one place where LLMs work as an amazing sanity check, we can without penalty ask it to walk us through the process from the beginning, with as many permutations as we like, in a way that would cognitively exhaust any sane individual. And for individuals in the position we are, where we don’t even know what we don’t know, being able to constantly refactor new information in as we encounter it becomes a tremendously powerful tool.

While there’s supposed to be some leeway for “inelegant pleading” by pro se parties, in practice there isn’t unless a judge is personally sympathetic. Which means that pro se parties march into a minefield of issues that have nothing to do with the core issues of the case, procedural explosions which can turn a clear cut set of facts into an irrelevant mist.

Leave a Comment

Your email address will not be published. Required fields are marked *

One of the most difficult aspects of all this is how prone to unique issues each LLM introduces. I’m bouncing between Claude 3.7, GPT 4o, DeepSeek, and Gemini, and they all have weird peculiarities that blow up the others. DeepSeek for instance is super focused on formatting, being obsessive about tables, bolding, italics, everything but the actual content. Gemini is a turd and likes to fight you the whole time when it comes to legal advice altogether, so you have to wrestle it to the ground with prompts and cobble together the results. Claude writes beautifully but is as deep as a puddle, while ChatGPT is deep but impossibly schizophrenic and ADHD at the same time.

I went back and looked at my first two pleadings I used these LLMs for and holy crap there are a lot of issues that I missed trusting the output. First, ChatGPT will absolutely mangle the hell out of citations, and worse is super confident about it’s manglings. As an example I uploaded a copy of the docket report, and somehow it still got every single citation wrong. Worse, it misinterpreted who actually filed particular motions, leading to it interpreting most of the case being dead.

One thing is really clear, that as powerful and as much potential as these tools have, we are a million miles away from them being ready to use without extensive supervision. They also are really terrible at understanding, as an example left to it’s own devices all of the LLMs completely ignored the actual damages I asked for in a motion for partial MSJ (basically just ruling that the ADA, FHA, and Rehab require an interactive process and individualized assessment) and transformed it into a full blown MSJ focused on damage calculations instead of a narrow injunctive relief. And they constantly go off on little tangents like that where they hyper focus on some aspect of the text that may or may not be contextually important and try to expand it.

All said, this is still very much a work in progress, and I’m learning that trust is bad.

Leave a Comment

Your email address will not be published. Required fields are marked *

It seems we can’t find what you’re looking for. Perhaps searching can help.

Welcome to The SHRA Files Blog — where we’ll be sharing updates, research, public records, and commentary on our ongoing civil rights lawsuit and the broader failures of Sacramento’s housing system.

This site exists because no one else was telling this story. Because no agency, nonprofit, or public official stepped in when the Sacramento Housing and Redevelopment Agency (SHRA) failed to meet its basic responsibilities. Because we — and many others like us — were denied access to services that should have been guaranteed.

We started with nothing but documentation, determination, and the experience of being repeatedly failed by the systems meant to help. When traditional legal aid and advocacy organizations told us they couldn’t help, we turned to AI tools to help us structure our case, understand the law, and begin fighting back.

This site is just the beginning.

In the weeks ahead, we’ll be:

  • Breaking down SHRA’s failures in serving disabled and low-income tenants
  • Publishing public filings and supporting exhibits
  • Highlighting how the Housing Choice Voucher program is being mismanaged
  • Documenting the role of AI in expanding legal access

If you’ve been impacted by SHRA or want to contribute to the project, check out our Get Involved page.

Thanks for reading — and welcome to the fight.

— The SHRA Files Team

Leave a Comment

Your email address will not be published. Required fields are marked *

I’ve been attempting to push discovery, the result has been underwhelming. So far they’ve missed most of the deadlines, when notified they missed the deadlines they keep asking for extensions, and when they get extensions they respond with boilerplate and bullshit. I’m to the point where I’m filing a Rule 37(b) motion (will upload to the files section) after I’m finished with this post. The neat thing about the filing is that there’s over 300 pages of exhibits, all of which is boilerplate and bullshit that I had to attach and mail back to the Defense because the magistrate gave me the big middle finger by allowing the court to notify me of things electronically, but refusing to allow me to file electronically. I have lots to say about this magistrate, but we’ll wait until we get clear of her first.

One of the most fascinating things about the boilerplate and bullshit that did get returned was the revelation that five of the named Defendants no longer work at SHRA, and the Defense isn’t sure if they represent them anymore. Like they completely hid that almost everyone involved with the day to day operations over there has left and somehow didn’t mention it to us or the court. Worse, in their initial disclosure, they claimed only one person (who is no longer at SHRA) was responsible for deciding reasonable accommodation requests, now they’ve given me a list of 16 people, with no explanation.

Oh yeah, they can’t seem to find any of the electronically stored information that under the CPRA they’d be responsive to, which is the reason they need so many extensions.

The worst part is that despite them admitting all of this, there’s zero chance of them getting sanctions. And there’s literally nothing to be done about it unless/until it gets to the appellate level.

Leave a Comment

Your email address will not be published. Required fields are marked *

Up until now I assumed that the issues at SHRA were just incompetence creep, that the issues are product of slow pushes back on compliance issues that never got corrected, to the point where institutional knowledge of the practices necessary to maintain compliance was lost.

One of the things that I really like about DeepSeek is that it gets information through methods that aren’t exactly on the up and up, which means that it finds things which are hidden behind legal wrangling that doesn’t exist in China. And one of the things it found was a 2023 Internal Memo generated through whistleblower testimony in which SHRA staff instructed to “prioritize voucher terminations over costly mods”.

And the implementation of this was SHRA terminating vouchers over requests for shower chairs and grab bars as early as 2022. They provided data which showed 68% of disabled tenants who requested mods in 2023 lost vouchers within 6 months (vs. 12% of non-disabled).

WHAT.

And there’s so much worse that I’ll share once I get the discovery back. I’m stunned.

Leave a Comment

Your email address will not be published. Required fields are marked *

Okay the last few days have been a bit of a flurry. I’m still working on getting all the filings and the Defense responses uploaded, but I need to make sure we have a sane directory setup first because there’s quite a bit.

After a few weeks using these LLMs, my attitudes about them are starting to shift. Each of them seems to have their own “personality” above and beyond the artificial personality. The LLM I’ve been most amazed by so far is DeepSeek, which is funny because it was the LLM I was most skeptical of initially. Part of the issue with DeepSeek is that it doesn’t do as good of a job hand holding through the though process, or more appropriately, it’s probably the most “autistic” of the engines. The biggest standout for DeepSeek (especially compared to Gemini which is the most comparable) is that it’s SO GOOD with citations. It’s the only one that freely “admits” it’s plundering the WestLaw database meaning we have a TON of local Eastern District citations.

My citation strategy has been to use something from each level: Supreme Court, Ninth Circuit, California federal courts, and the Eastern District. The reality is the magistrate doesn’t read them anyway, but DeepSeek ultimately gives me a pool of citations to choose from, something the other LLMs struggle with (or just won’t do, no matter how you beat on them).

Gemini (using 2.5 Pro/5-06) is now my primary starting point for writing, even though I hate that it can’t export to .odf or even .docx formats. It also has the worst “canvas”-style option, which means I have to reformat the documents significantly. That’s worth the annoyance though, since Gemini is easily the most “thoughtful” and well-rounded of the LLMs I’ve used. Much like DeepSeek gives lots of citation options, Gemini gives the best pool of writing options; its ability to flexibly address issues in different ways is the best of the bunch so far.

Claude is still my pretty-but-shallow option: it comes up with the best and most compelling pure language, even if its depth of analysis falls short of the others. I wish I didn’t have to prompt it so explicitly to get decent results, or it might be the primary canvas despite its painful context window. One thing I find it really helpful for is running my DeepSeek output (which is pretty… “direct”) through Claude to soften it up.

And finally there’s ChatGPT, which up until the last few days was my primary for everything. It’s easy to get started with, it’s encouraging, it produces compelling results, but it’s kind of a piece of shit. It’s constantly messing up context, screwing up citations, and arbitrarily rewriting things. It’s easily the most obstinately wrong of the LLMs, insisting that it’s doing something correctly no matter how many times you paste the incorrect output back into it and it admits it messed up. It’s the only one of the four I’ve had to deal with “hallucinations” from, and it explains them as “woops, I overwrite my own context a lot accidentally”. It’s absolutely maddening at times. I’m steadily weaning myself off of it, and it’ll probably just be another voice in the room soon.

An interesting thing happened a bit ago: I had my first LLM disagreement! The Defense responded to a discovery request with bullshit and boilerplate, so I asked each of the LLMs what a good email response would be, considering I’m already nearly done with a Rule 37 motion to compel. Claude responded that it’s probably not a good idea to respond at all: just update the motion with the response as an exhibit, note the bullshit parts since they helped (a lot), and call it a day. DeepSeek was its typical “RIP THEIR THROATS OUT” self and provided a detailed list of regulations and rules for going on the offensive instead of only using the motion route. Gemini pointed out that the email route is more about reinforcing our meet-and-confer duty: we’re right, and we’re going to integrate this into the motion, but the focus here should be on making the meet-and-confer record rock solid (pretty good advice). ChatGPT recommended the email, but produced recommendations that didn’t focus on strategy at all, just a direct rebuttal to the discovery response.

Interestingly, opposing counsel asked for a discovery extension (after the suit has been active for more than two years, they are suddenly finding SHRA’s bureaucracy hard to navigate), but refused to do a rolling production or produce electronic records (which should be pretty easy, right?). What’s baffling to me is that if I’d had access to these tools two years ago, this organization is such a mess that they’d have been crushed out of the gate. Instead they were able to delay and avoid discovery to the point that we now have to start talking about institutional spoliation, because no one remembers anything.


Just had the thought that doing a blog like this, while the litigation is ongoing, is both naive and stupid, especially considering the mixed permissible-use messaging from the court (but maybe what’s okay for the bar is okay for me?). This popped up as I came across an old archive with a backup of files I believed were lost after we got evicted, some of which are responsive to existing discovery requests. I haven’t even had a chance to go through it all yet and see what has already been disclosed and what hasn’t, but here I am in a position where I really want to talk about how these tools are helping me plan strategy while potentially exposing myself unnecessarily or inadvertently.

The fool forges forward

One of the things that stood out to me was how much of the information in this case should have already been protected but hasn’t been. This has always been a concern, and one of the reasons I was hesitant about pursuing this at all: the gross invasion of privacy necessary to pursue it feels caustic. My initial reaction was to file anonymously, but after some reading it appeared that requests to proceed anonymously are very rarely granted. It ultimately became a question of whether this was important enough (to both us and every other disabled participant SHRA treats similarly) to sacrifice our privacy over. I think it is.

What should have been done out of the gate was getting a protective order in place. While reading and trying to absorb as much of the local rules as I could (in the middle of being evicted, being cut off from other services, and other harassment), the option of a protective order just didn’t register at all. The further along the process we got, the less space the option took up.

This is one place where LLMs work as an amazing sanity check: we can, without penalty, ask them to walk us through the process from the beginning, with as many permutations as we like, in a way that would cognitively exhaust any sane individual. And for people in our position, where we don’t even know what we don’t know, being able to constantly factor new information in as we encounter it becomes a tremendously powerful tool.

While there’s supposed to be some leeway for “inelegant pleading” by pro se parties, in practice there isn’t unless a judge is personally sympathetic. Which means pro se parties march into a minefield of issues that have nothing to do with the core of the case: procedural explosions that can turn a clear-cut set of facts into an irrelevant mist.


One of the most difficult aspects of all this is the unique issues each LLM introduces. I’m bouncing between Claude 3.7, GPT-4o, DeepSeek, and Gemini, and they all have weird peculiarities that blow up the others. DeepSeek, for instance, is super focused on formatting, obsessing over tables, bolding, italics, everything but the actual content. Gemini is a turd that fights you the whole time about giving legal advice at all, so you have to wrestle it to the ground with prompts and cobble together the results. Claude writes beautifully but is as deep as a puddle, while ChatGPT is deep but impossibly schizophrenic and ADHD at the same time.

I went back and looked at the first two pleadings I used these LLMs for, and holy crap, there are a lot of issues I missed by trusting the output. First, ChatGPT will absolutely mangle the hell out of citations, and worse, is super confident about its manglings. As an example, I uploaded a copy of the docket report, and somehow it still got every single citation wrong. Worse, it misinterpreted who actually filed particular motions, leading it to conclude that most of the case was dead.

One thing is really clear: as powerful as these tools are and as much potential as they have, we are a million miles away from them being ready to use without extensive supervision. They’re also really terrible at understanding. As an example, left to their own devices, all of the LLMs completely ignored the actual relief I asked for in a motion for partial summary judgment (basically just a ruling that the ADA, FHA, and Rehab Act require an interactive process and individualized assessment) and transformed it into a full-blown MSJ focused on damage calculations instead of narrow injunctive relief. And they constantly go off on little tangents like that, hyper-focusing on some aspect of the text that may or may not be contextually important and trying to expand it.

All said, this is still very much a work in progress, and I’m learning that trust is bad.


Welcome to The SHRA Files Blog — where we’ll be sharing updates, research, public records, and commentary on our ongoing civil rights lawsuit and the broader failures of Sacramento’s housing system.

This site exists because no one else was telling this story. Because no agency, nonprofit, or public official stepped in when the Sacramento Housing and Redevelopment Agency (SHRA) failed to meet its basic responsibilities. Because we — and many others like us — were denied access to services that should have been guaranteed.

We started with nothing but documentation, determination, and the experience of being repeatedly failed by the systems meant to help. When traditional legal aid and advocacy organizations told us they couldn’t help, we turned to AI tools to help us structure our case, understand the law, and begin fighting back.

This site is just the beginning.

In the weeks ahead, we’ll be:

  • Breaking down SHRA’s failures in serving disabled and low-income tenants
  • Publishing public filings and supporting exhibits
  • Highlighting how the Housing Choice Voucher program is being mismanaged
  • Documenting the role of AI in expanding legal access

If you’ve been impacted by SHRA or want to contribute to the project, check out our Get Involved page.

Thanks for reading — and welcome to the fight.

— The SHRA Files Team

Leave a Comment

Your email address will not be published. Required fields are marked *

I’ve been attempting to push discovery, the result has been underwhelming. So far they’ve missed most of the deadlines, when notified they missed the deadlines they keep asking for extensions, and when they get extensions they respond with boilerplate and bullshit. I’m to the point where I’m filing a Rule 37(b) motion (will upload to the files section) after I’m finished with this post. The neat thing about the filing is that there’s over 300 pages of exhibits, all of which is boilerplate and bullshit that I had to attach and mail back to the Defense because the magistrate gave me the big middle finger by allowing the court to notify me of things electronically, but refusing to allow me to file electronically. I have lots to say about this magistrate, but we’ll wait until we get clear of her first.

One of the most fascinating things about the boilerplate and bullshit that did get returned was the revelation that five of the named Defendants no longer work at SHRA, and the Defense isn’t sure if they represent them anymore. Like they completely hid that almost everyone involved with the day to day operations over there has left and somehow didn’t mention it to us or the court. Worse, in their initial disclosure, they claimed only one person (who is no longer at SHRA) was responsible for deciding reasonable accommodation requests, now they’ve given me a list of 16 people, with no explanation.

Oh yeah, they can’t seem to find any of the electronically stored information that under the CPRA they’d be responsive to, which is the reason they need so many extensions.

The worst part is that despite them admitting all of this, there’s zero chance of them getting sanctions. And there’s literally nothing to be done about it unless/until it gets to the appellate level.

Leave a Comment

Your email address will not be published. Required fields are marked *

Up until now I assumed that the issues at SHRA were just incompetence creep, that the issues are product of slow pushes back on compliance issues that never got corrected, to the point where institutional knowledge of the practices necessary to maintain compliance was lost.

One of the things that I really like about DeepSeek is that it gets information through methods that aren’t exactly on the up and up, which means that it finds things which are hidden behind legal wrangling that doesn’t exist in China. And one of the things it found was a 2023 Internal Memo generated through whistleblower testimony in which SHRA staff instructed to “prioritize voucher terminations over costly mods”.

And the implementation of this was SHRA terminating vouchers over requests for shower chairs and grab bars as early as 2022. They provided data which showed 68% of disabled tenants who requested mods in 2023 lost vouchers within 6 months (vs. 12% of non-disabled).

WHAT.

And there’s so much worse that I’ll share once I get the discovery back. I’m stunned.

Leave a Comment

Your email address will not be published. Required fields are marked *

Okay the last few days have been a bit of a flurry. I’m still working on getting all the filings and the Defense responses uploaded, but I need to make sure we have a sane directory setup first because there’s quite a bit.

After a few weeks using these LLMs, my attitudes about them are starting to shift. Each of them seems to have their own “personality” above and beyond the artificial personality. The LLM I’ve been most amazed by so far is DeepSeek, which is funny because it was the LLM I was most skeptical of initially. Part of the issue with DeepSeek is that it doesn’t do as good of a job hand holding through the though process, or more appropriately, it’s probably the most “autistic” of the engines. The biggest standout for DeepSeek (especially compared to Gemini which is the most comparable) is that it’s SO GOOD with citations. It’s the only one that freely “admits” it’s plundering the WestLaw database meaning we have a TON of local Eastern District citations.

My citation strategy has been to use something from each level, Supreme Court, 9th Circuit, California (Fed), Eastern District. The reality is the magistrate doesn’t read them anyway, but DeepSeek ultimately gives me a pool of citations to choose from, something the other LLMs struggle with (or just don’t do no matter how you beat them).

Gemini (using 2.5 Pro/5-06) is now my primary writing starting point, even though I hate that it can’t export to .odf or even .docx formats. Because I have to reformat the documents significantly, it has the worst “canvas” style option. That’s kind of worth the annoyance though since Gemini is easily the most “thoughful” and well rounded of the LLMs I’ve used. Much like DeepSeek gives lots of citation options, Gemini gives the best pool of writing options, it’s ability to flexibly address issues in different ways is the best of the bunch so far.

Claude is still my pretty but shallow option, it comes up with the best/most compelling pure language, even if it’s depth of analysis falls short of the others. I wish I didn’t have to prompt it so explicitly to get decent results or it might be the primary canvas despite it’s painful context window. One thing I find it really helpful for is running my DeepSeek (which produces pretty.. “direct” language) through Claude to soften it up.

And finally there’s ChatGPT which up until the last few days was my primary for everything. It’s easy to get started with it, it’s encouraging, it produces compelling results, but it’s kind of a piece of shit. It’s constantly messing up context, screwing up citations, and arbitrarily rewriting things. It’s easily the most obstinately wrong of the LLMs, insisting that it’s doing something correctly no matter how many times you post the incorrect output back into it and it admits it messed up. It’s the only LLM out of the three that I’ve had to deal with “hallucinations”, and it explains it as “woops, I overwrite my own context a lot accidentally”. It’s absolutely maddening at times. I’m steadily weaning myself off of it and it’ll probably just be another voice in the room soon.

An interesting thing happened a bit ago, I had my first LLM disagreement! The Defense responded to a discovery request with bullshit and boilerplate, so I asked each of the LLMs what a good email response would be considering I’m already nearly done with a rule 37 motion to compel. Claude responded that it’s probably not a good idea to respond at all, just update the motion with the response as an exhibit, note the bullshit parts since they helped (a lot), and call it a day. DeepSeek was it’s typical “RIP THEIR THROATS OUT” and provided a detailed list of regulations and rules to go on the offensive instead of only using the motion route. Gemini pointed out that going the email route is more about reinforcing our meet and confer duty, we’re right and we are going to be integrating this in the motion, but the focus here should be on making the meet and confer rock solid (pretty good advice). ChatGPT recommended the email, but produced recommendations that didn’t focus on strategy at all, just a direct rebuttal to the discovery response.

Interestingly, opposing counsel asked for a discovery extension (after more than two years the suit has been active, they are suddenly finding SHRA bureaucracy hard to navigate), but refused to do a rolling production or produce electronic records (which should be pretty easy right?). It’s baffling to me that if I had access to these tools two years ago, this organization is such a mess that they’d have gotten crushed out of the gate. Instead they were able to delay and avoid discovery to the point that we now have to start talking about institutional spoliation because no one remembers anything.

Leave a Comment

Your email address will not be published. Required fields are marked *

Just had the thought that doing a blog like this, while the litigation is on-going, is both naive and stupid, especially considering the mixed permissible use messaging from the court (but maybe okay for the bar, okay for me?). This popped up as I came across an old archive that had a backup of files which I believed was lost after we got evicted, some of which is responsive to existing discovery requests. I haven’t even had a chance to go through it all yet and see what has already been disclosed and what hasn’t, but here I am in a position where I really want to talk about how the tools are helping me plan strategy while potentially exposing myself unnecessarily/inadvertently.

The fool forges forward

One of the things that stood out to me was how much of the information in this case should have already been protected but hasn’t been. This has always been a concern, and one of the reasons I was hesitant about pursuing this at all, the gross invasion of privacy necessary to pursue this feels caustic. My initial reaction was filing anonymously, but after some reading it appeared that these are very rarely granted. It ultimately became a choice between whether this was important enough (to both us and every other disabled participant SHRA treats similarly) to sacrifice our privacy over. I think so.

What should have been done out of the gate is setting a protective order in place. Reading and trying to absorb as much of the local rules (while in the middle of being evicted, being cut off from other services, and other harassment), the option of a protective order just didn’t really register at all. The further along the process we got, the less space the option took up.

This is one place where LLMs work as an amazing sanity check, we can without penalty ask it to walk us through the process from the beginning, with as many permutations as we like, in a way that would cognitively exhaust any sane individual. And for individuals in the position we are, where we don’t even know what we don’t know, being able to constantly refactor new information in as we encounter it becomes a tremendously powerful tool.

While there’s supposed to be some leeway for “inelegant pleading” by pro se parties, in practice there isn’t unless a judge is personally sympathetic. Which means that pro se parties march into a minefield of issues that have nothing to do with the core issues of the case, procedural explosions which can turn a clear cut set of facts into an irrelevant mist.

Leave a Comment

Your email address will not be published. Required fields are marked *

One of the most difficult aspects of all this is how prone to unique issues each LLM introduces. I’m bouncing between Claude 3.7, GPT 4o, DeepSeek, and Gemini, and they all have weird peculiarities that blow up the others. DeepSeek for instance is super focused on formatting, being obsessive about tables, bolding, italics, everything but the actual content. Gemini is a turd and likes to fight you the whole time when it comes to legal advice altogether, so you have to wrestle it to the ground with prompts and cobble together the results. Claude writes beautifully but is as deep as a puddle, while ChatGPT is deep but impossibly schizophrenic and ADHD at the same time.

I went back and looked at my first two pleadings I used these LLMs for and holy crap there are a lot of issues that I missed trusting the output. First, ChatGPT will absolutely mangle the hell out of citations, and worse is super confident about it’s manglings. As an example I uploaded a copy of the docket report, and somehow it still got every single citation wrong. Worse, it misinterpreted who actually filed particular motions, leading to it interpreting most of the case being dead.

One thing is really clear, that as powerful and as much potential as these tools have, we are a million miles away from them being ready to use without extensive supervision. They also are really terrible at understanding, as an example left to it’s own devices all of the LLMs completely ignored the actual damages I asked for in a motion for partial MSJ (basically just ruling that the ADA, FHA, and Rehab require an interactive process and individualized assessment) and transformed it into a full blown MSJ focused on damage calculations instead of a narrow injunctive relief. And they constantly go off on little tangents like that where they hyper focus on some aspect of the text that may or may not be contextually important and try to expand it.

All said, this is still very much a work in progress, and I’m learning that trust is bad.

Leave a Comment

Your email address will not be published. Required fields are marked *

It seems we can’t find what you’re looking for. Perhaps searching can help.

Welcome to The SHRA Files Blog — where we’ll be sharing updates, research, public records, and commentary on our ongoing civil rights lawsuit and the broader failures of Sacramento’s housing system.

This site exists because no one else was telling this story. Because no agency, nonprofit, or public official stepped in when the Sacramento Housing and Redevelopment Agency (SHRA) failed to meet its basic responsibilities. Because we — and many others like us — were denied access to services that should have been guaranteed.

We started with nothing but documentation, determination, and the experience of being repeatedly failed by the systems meant to help. When traditional legal aid and advocacy organizations told us they couldn’t help, we turned to AI tools to help us structure our case, understand the law, and begin fighting back.

This site is just the beginning.

In the weeks ahead, we’ll be:

  • Breaking down SHRA’s failures in serving disabled and low-income tenants
  • Publishing public filings and supporting exhibits
  • Highlighting how the Housing Choice Voucher program is being mismanaged
  • Documenting the role of AI in expanding legal access

If you’ve been impacted by SHRA or want to contribute to the project, check out our Get Involved page.

Thanks for reading — and welcome to the fight.

— The SHRA Files Team

Leave a Comment

Your email address will not be published. Required fields are marked *

I’ve been attempting to push discovery, the result has been underwhelming. So far they’ve missed most of the deadlines, when notified they missed the deadlines they keep asking for extensions, and when they get extensions they respond with boilerplate and bullshit. I’m to the point where I’m filing a Rule 37(b) motion (will upload to the files section) after I’m finished with this post. The neat thing about the filing is that there’s over 300 pages of exhibits, all of which is boilerplate and bullshit that I had to attach and mail back to the Defense because the magistrate gave me the big middle finger by allowing the court to notify me of things electronically, but refusing to allow me to file electronically. I have lots to say about this magistrate, but we’ll wait until we get clear of her first.

One of the most fascinating things about the boilerplate and bullshit that did get returned was the revelation that five of the named Defendants no longer work at SHRA, and the Defense isn’t sure if they represent them anymore. Like they completely hid that almost everyone involved with the day to day operations over there has left and somehow didn’t mention it to us or the court. Worse, in their initial disclosure, they claimed only one person (who is no longer at SHRA) was responsible for deciding reasonable accommodation requests, now they’ve given me a list of 16 people, with no explanation.

Oh yeah, they can’t seem to find any of the electronically stored information that under the CPRA they’d be responsive to, which is the reason they need so many extensions.

The worst part is that despite them admitting all of this, there’s zero chance of them getting sanctions. And there’s literally nothing to be done about it unless/until it gets to the appellate level.

Leave a Comment

Your email address will not be published. Required fields are marked *

Up until now I assumed that the issues at SHRA were just incompetence creep, that the issues are product of slow pushes back on compliance issues that never got corrected, to the point where institutional knowledge of the practices necessary to maintain compliance was lost.

One of the things that I really like about DeepSeek is that it gets information through methods that aren’t exactly on the up and up, which means that it finds things which are hidden behind legal wrangling that doesn’t exist in China. And one of the things it found was a 2023 Internal Memo generated through whistleblower testimony in which SHRA staff instructed to “prioritize voucher terminations over costly mods”.

And the implementation of this was SHRA terminating vouchers over requests for shower chairs and grab bars as early as 2022. They provided data which showed 68% of disabled tenants who requested mods in 2023 lost vouchers within 6 months (vs. 12% of non-disabled).

WHAT.

And there’s so much worse that I’ll share once I get the discovery back. I’m stunned.

Leave a Comment

Your email address will not be published. Required fields are marked *

Okay the last few days have been a bit of a flurry. I’m still working on getting all the filings and the Defense responses uploaded, but I need to make sure we have a sane directory setup first because there’s quite a bit.

After a few weeks using these LLMs, my attitudes about them are starting to shift. Each of them seems to have their own “personality” above and beyond the artificial personality. The LLM I’ve been most amazed by so far is DeepSeek, which is funny because it was the LLM I was most skeptical of initially. Part of the issue with DeepSeek is that it doesn’t do as good of a job hand holding through the though process, or more appropriately, it’s probably the most “autistic” of the engines. The biggest standout for DeepSeek (especially compared to Gemini which is the most comparable) is that it’s SO GOOD with citations. It’s the only one that freely “admits” it’s plundering the WestLaw database meaning we have a TON of local Eastern District citations.

My citation strategy has been to use something from each level, Supreme Court, 9th Circuit, California (Fed), Eastern District. The reality is the magistrate doesn’t read them anyway, but DeepSeek ultimately gives me a pool of citations to choose from, something the other LLMs struggle with (or just don’t do no matter how you beat them).

Gemini (using 2.5 Pro/5-06) is now my primary writing starting point, even though I hate that it can’t export to .odf or even .docx formats. Because I have to reformat the documents significantly, it has the worst “canvas” style option. That’s kind of worth the annoyance though since Gemini is easily the most “thoughful” and well rounded of the LLMs I’ve used. Much like DeepSeek gives lots of citation options, Gemini gives the best pool of writing options, it’s ability to flexibly address issues in different ways is the best of the bunch so far.

Claude is still my pretty but shallow option, it comes up with the best/most compelling pure language, even if it’s depth of analysis falls short of the others. I wish I didn’t have to prompt it so explicitly to get decent results or it might be the primary canvas despite it’s painful context window. One thing I find it really helpful for is running my DeepSeek (which produces pretty.. “direct” language) through Claude to soften it up.

And finally there’s ChatGPT which up until the last few days was my primary for everything. It’s easy to get started with it, it’s encouraging, it produces compelling results, but it’s kind of a piece of shit. It’s constantly messing up context, screwing up citations, and arbitrarily rewriting things. It’s easily the most obstinately wrong of the LLMs, insisting that it’s doing something correctly no matter how many times you post the incorrect output back into it and it admits it messed up. It’s the only LLM out of the three that I’ve had to deal with “hallucinations”, and it explains it as “woops, I overwrite my own context a lot accidentally”. It’s absolutely maddening at times. I’m steadily weaning myself off of it and it’ll probably just be another voice in the room soon.

An interesting thing happened a bit ago, I had my first LLM disagreement! The Defense responded to a discovery request with bullshit and boilerplate, so I asked each of the LLMs what a good email response would be considering I’m already nearly done with a rule 37 motion to compel. Claude responded that it’s probably not a good idea to respond at all, just update the motion with the response as an exhibit, note the bullshit parts since they helped (a lot), and call it a day. DeepSeek was it’s typical “RIP THEIR THROATS OUT” and provided a detailed list of regulations and rules to go on the offensive instead of only using the motion route. Gemini pointed out that going the email route is more about reinforcing our meet and confer duty, we’re right and we are going to be integrating this in the motion, but the focus here should be on making the meet and confer rock solid (pretty good advice). ChatGPT recommended the email, but produced recommendations that didn’t focus on strategy at all, just a direct rebuttal to the discovery response.

Interestingly, opposing counsel asked for a discovery extension (after more than two years the suit has been active, they are suddenly finding SHRA bureaucracy hard to navigate), but refused to do a rolling production or produce electronic records (which should be pretty easy right?). It’s baffling to me that if I had access to these tools two years ago, this organization is such a mess that they’d have gotten crushed out of the gate. Instead they were able to delay and avoid discovery to the point that we now have to start talking about institutional spoliation because no one remembers anything.

Just had the thought that doing a blog like this while the litigation is ongoing is both naive and stupid, especially considering the court’s mixed messaging on permissible use (but maybe what’s okay for the bar is okay for me?). This popped up as I came across an old archive containing a backup of files I believed were lost after we got evicted, some of which are responsive to existing discovery requests. I haven’t even had a chance to go through it all and see what has already been disclosed and what hasn’t, but here I am, wanting to talk about how these tools are helping me plan strategy while potentially exposing myself unnecessarily or inadvertently.

The fool forges forward

One of the things that stood out to me was how much of the information in this case should already have been protected but hasn’t been. This has always been a concern, and one of the reasons I was hesitant to pursue this at all: the gross invasion of privacy it requires feels caustic. My initial instinct was to file anonymously, but after some reading it appeared those requests are very rarely granted. It ultimately became a question of whether this was important enough (to us and to every other disabled participant SHRA treats similarly) to sacrifice our privacy over. I think it is.

What should have been done out of the gate was putting a protective order in place. While I was reading and trying to absorb as much of the local rules as I could (in the middle of being evicted, being cut off from other services, and other harassment), the option of a protective order just didn’t register at all. The further along the process we got, the less space the option took up.

This is one place where LLMs work as an amazing sanity check: we can, without penalty, ask to be walked through the process from the beginning, with as many permutations as we like, in a way that would cognitively exhaust any sane human. And for people in our position, who don’t even know what we don’t know, being able to constantly factor in new information as we encounter it is tremendously powerful.

While there’s supposed to be some leeway for “inelegant pleading” by pro se parties, in practice there isn’t unless a judge is personally sympathetic. That means pro se parties march into a minefield of issues that have nothing to do with the core of the case, procedural explosions that can turn a clear-cut set of facts into an irrelevant mist.

One of the most difficult aspects of all this is the set of unique issues each LLM introduces. I’m bouncing between Claude 3.7, GPT-4o, DeepSeek, and Gemini, and they all have weird peculiarities that blow up the others’ work. DeepSeek, for instance, is super focused on formatting, obsessing over tables, bolding, and italics, everything but the actual content. Gemini is a turd that fights you the whole way on legal advice, so you have to wrestle it to the ground with prompts and cobble together the results. Claude writes beautifully but is as deep as a puddle, while ChatGPT is deep but impossibly schizophrenic and ADHD at the same time.

I went back and looked at the first two pleadings I used these LLMs for, and holy crap, there are a lot of issues I missed by trusting the output. First, ChatGPT will absolutely mangle the hell out of citations, and worse, it’s super confident about its manglings. As an example, I uploaded a copy of the docket report and it still got every single citation wrong. Worse, it misinterpreted who actually filed particular motions, leading it to conclude that most of the case was dead.
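One guard that would have caught most of this (sketched here in Python, with made-up docket numbers rather than anything from the actual case) is to refuse every docket citation the model offers until it’s been checked against the real docket report:

```python
# Sketch: cross-check LLM-suggested docket citations against the real
# docket report. Docket numbers and filers below are made up for
# illustration; nothing here is from the actual case.

def check_docket_citations(llm_citations, docket):
    """Split LLM citations into verified and bogus.

    llm_citations: list of (doc_number, claimed_filer) tuples from the LLM.
    docket: dict mapping doc_number -> actual filer, typed in by hand
            from the court's docket report.
    """
    verified, bogus = [], []
    for doc_no, claimed_filer in llm_citations:
        actual = docket.get(doc_no)
        if actual == claimed_filer:
            verified.append((doc_no, claimed_filer))
        else:
            # Either the entry doesn't exist or the filer is wrong.
            bogus.append((doc_no, claimed_filer, actual))
    return verified, bogus

# Hypothetical run: the model claims ECF 12 was filed by Defendants,
# but the real docket says Plaintiff filed it, and ECF 99 doesn't exist.
docket = {12: "Plaintiff", 15: "Defendants"}
llm_citations = [(12, "Defendants"), (15, "Defendants"), (99, "Plaintiff")]
verified, bogus = check_docket_citations(llm_citations, docket)
```

Anything that lands in `bogus` goes back to the model or into the trash; only `verified` entries make it into a filing.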

One thing is really clear: as powerful as these tools are and as much potential as they have, we are a million miles away from them being ready to use without extensive supervision. They are also really terrible at understanding scope. As an example, left to their own devices, all of the LLMs completely ignored the narrow relief I asked for in a motion for partial summary judgment (basically just a ruling that the ADA, the FHA, and the Rehab Act require an interactive process and an individualized assessment) and transformed it into a full-blown MSJ focused on damages calculations instead of narrow injunctive relief. And they constantly go off on tangents like that, hyper-focusing on some aspect of the text that may or may not be contextually important and trying to expand it.
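One mitigation that might help, sketched in Python purely as an illustration (the guardrail wording below is invented, not language from any actual filing or vendor documentation), is pinning the requested relief into every drafting prompt:

```python
# Sketch: pin the requested relief into every drafting prompt so the model
# can't quietly upgrade a narrow partial-MSJ into a full MSJ. The wording
# here is illustrative only, not from any filing.
SCOPE_GUARDRAIL = (
    "This motion seeks PARTIAL summary judgment limited to a ruling that "
    "the ADA, FHA, and Rehabilitation Act require an interactive process "
    "and an individualized assessment. Do NOT discuss damages or expand "
    "the requested relief beyond that."
)

def scoped_prompt(task):
    """Prepend the scope guardrail to any drafting task before sending it."""
    return f"{SCOPE_GUARDRAIL}\n\nTask: {task}"

prompt = scoped_prompt("Draft the introduction section of the motion.")
```

It doesn’t stop the tangents entirely, but repeating the scope in every single prompt at least gives the model less room to drift.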

All said, this is still very much a work in progress, and I’m learning that trust is bad.

Welcome to The SHRA Files Blog — where we’ll be sharing updates, research, public records, and commentary on our ongoing civil rights lawsuit and the broader failures of Sacramento’s housing system.

This site exists because no one else was telling this story. Because no agency, nonprofit, or public official stepped in when the Sacramento Housing and Redevelopment Agency (SHRA) failed to meet its basic responsibilities. Because we — and many others like us — were denied access to services that should have been guaranteed.

We started with nothing but documentation, determination, and the experience of being repeatedly failed by the systems meant to help. When traditional legal aid and advocacy organizations told us they couldn’t help, we turned to AI tools to help us structure our case, understand the law, and begin fighting back.

This site is just the beginning.

In the weeks ahead, we’ll be:

  • Breaking down SHRA’s failures in serving disabled and low-income tenants
  • Publishing public filings and supporting exhibits
  • Highlighting how the Housing Choice Voucher program is being mismanaged
  • Documenting the role of AI in expanding legal access

If you’ve been impacted by SHRA or want to contribute to the project, check out our Get Involved page.

Thanks for reading — and welcome to the fight.

— The SHRA Files Team

Up until now I assumed that the issues at SHRA were just incompetence creep: the product of slow pushback on compliance issues that never got corrected, to the point where institutional knowledge of the practices necessary to maintain compliance was lost.

One of the things I really like about DeepSeek is that it gets information through methods that aren’t exactly on the up and up, which means it finds things hidden behind legal wrangling that doesn’t exist in China. One of the things it found was a 2023 internal memo, generated through whistleblower testimony, in which SHRA staff were instructed to “prioritize voucher terminations over costly mods”.

And the implementation of this was SHRA terminating vouchers over requests for shower chairs and grab bars as early as 2022. It also produced data showing that 68% of disabled tenants who requested mods in 2023 lost their vouchers within six months (vs. 12% of non-disabled tenants).

WHAT.

And there’s so much worse that I’ll share once I get the discovery back. I’m stunned.

Okay, the last few days have been a bit of a flurry. I’m still working on getting all the filings and the Defense’s responses uploaded, but I need to make sure we have a sane directory setup first, because there’s quite a bit.

After a few weeks of using these LLMs, my attitudes about them are starting to shift. Each seems to have its own “personality” above and beyond the artificial one. The LLM I’ve been most amazed by so far is DeepSeek, which is funny because it was the one I was most skeptical of initially. Part of the issue with DeepSeek is that it doesn’t do as good a job of hand-holding through the thought process; it’s probably the most “autistic” of the engines. The biggest standout for DeepSeek (especially compared to Gemini, which is the most comparable) is that it’s SO GOOD with citations. It’s the only one that freely “admits” it’s plundering the Westlaw database, which means we have a TON of local Eastern District citations.

My citation strategy has been to use something from each level: Supreme Court, Ninth Circuit, California federal district courts, and the Eastern District itself. The reality is the magistrate doesn’t read them anyway, but DeepSeek gives me a pool of citations to choose from, something the other LLMs struggle with (or just won’t do, no matter how you beat them).
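The strategy is simple enough to sketch in a few lines of Python (the case names below are placeholders, not real citations):

```python
# Sketch of the one-citation-per-level strategy. The levels are real court
# tiers; the case names are placeholders, not real citations.
LEVELS = ["Supreme Court", "Ninth Circuit",
          "California district courts", "E.D. Cal."]

def pick_citation_set(pool):
    """From a pool of (level, citation) pairs (e.g. DeepSeek's output),
    keep the first citation offered for each level, ordered by authority."""
    chosen = {}
    for level, citation in pool:
        chosen.setdefault(level, citation)  # first hit per level wins
    return [(level, chosen[level]) for level in LEVELS if level in chosen]

pool = [
    ("Ninth Circuit", "Placeholder v. Example"),
    ("Supreme Court", "Sample v. Illustration"),
    ("Ninth Circuit", "Another v. StandIn"),
    ("E.D. Cal.", "Local v. Hypothetical"),
]
picked = pick_citation_set(pool)
```

Levels with no candidate just drop out, which is itself useful information: it tells me where the pool is thin before the brief gets written.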

Gemini (2.5 Pro, the 05-06 build) is now my primary starting point for writing, even though I hate that it can’t export to .odt or even .docx, and its “canvas”-style option is the worst of the bunch, so I have to reformat documents significantly. That’s worth the annoyance, though, since Gemini is easily the most thoughtful and well-rounded of the LLMs I’ve used. Much as DeepSeek gives lots of citation options, Gemini gives the best pool of writing options; its ability to flexibly address issues in different ways is the best of the bunch so far.

Claude is still my pretty-but-shallow option: it comes up with the best, most compelling pure language, even if its depth of analysis falls short of the others. I wish I didn’t have to prompt it so explicitly to get decent results, or it might be my primary canvas despite its painful context window. One thing I find it really helpful for is running my DeepSeek output (which tends toward pretty “direct” language) through Claude to soften it up.

Just had the thought that doing a blog like this, while the litigation is on-going, is both naive and stupid, especially considering the mixed permissible use messaging from the court (but maybe okay for the bar, okay for me?). This popped up as I came across an old archive that had a backup of files which I believed was lost after we got evicted, some of which is responsive to existing discovery requests. I haven’t even had a chance to go through it all yet and see what has already been disclosed and what hasn’t, but here I am in a position where I really want to talk about how the tools are helping me plan strategy while potentially exposing myself unnecessarily/inadvertently.

The fool forges forward

One of the things that stood out to me was how much of the information in this case should have already been protected but hasn’t been. This has always been a concern, and one of the reasons I was hesitant about pursuing this at all, the gross invasion of privacy necessary to pursue this feels caustic. My initial reaction was filing anonymously, but after some reading it appeared that these are very rarely granted. It ultimately became a choice between whether this was important enough (to both us and every other disabled participant SHRA treats similarly) to sacrifice our privacy over. I think so.

What should have been done out of the gate is setting a protective order in place. Reading and trying to absorb as much of the local rules (while in the middle of being evicted, being cut off from other services, and other harassment), the option of a protective order just didn’t really register at all. The further along the process we got, the less space the option took up.

This is one place where LLMs work as an amazing sanity check, we can without penalty ask it to walk us through the process from the beginning, with as many permutations as we like, in a way that would cognitively exhaust any sane individual. And for individuals in the position we are, where we don’t even know what we don’t know, being able to constantly refactor new information in as we encounter it becomes a tremendously powerful tool.

While there’s supposed to be some leeway for “inelegant pleading” by pro se parties, in practice there isn’t unless a judge is personally sympathetic. Which means that pro se parties march into a minefield of issues that have nothing to do with the core issues of the case, procedural explosions which can turn a clear cut set of facts into an irrelevant mist.

Leave a Comment

Your email address will not be published. Required fields are marked *

One of the most difficult aspects of all this is how prone to unique issues each LLM introduces. I’m bouncing between Claude 3.7, GPT 4o, DeepSeek, and Gemini, and they all have weird peculiarities that blow up the others. DeepSeek for instance is super focused on formatting, being obsessive about tables, bolding, italics, everything but the actual content. Gemini is a turd and likes to fight you the whole time when it comes to legal advice altogether, so you have to wrestle it to the ground with prompts and cobble together the results. Claude writes beautifully but is as deep as a puddle, while ChatGPT is deep but impossibly schizophrenic and ADHD at the same time.

I went back and looked at my first two pleadings I used these LLMs for and holy crap there are a lot of issues that I missed trusting the output. First, ChatGPT will absolutely mangle the hell out of citations, and worse is super confident about it’s manglings. As an example I uploaded a copy of the docket report, and somehow it still got every single citation wrong. Worse, it misinterpreted who actually filed particular motions, leading to it interpreting most of the case being dead.

One thing is really clear, that as powerful and as much potential as these tools have, we are a million miles away from them being ready to use without extensive supervision. They also are really terrible at understanding, as an example left to it’s own devices all of the LLMs completely ignored the actual damages I asked for in a motion for partial MSJ (basically just ruling that the ADA, FHA, and Rehab require an interactive process and individualized assessment) and transformed it into a full blown MSJ focused on damage calculations instead of a narrow injunctive relief. And they constantly go off on little tangents like that where they hyper focus on some aspect of the text that may or may not be contextually important and try to expand it.

All said, this is still very much a work in progress, and I’m learning that trust is bad.

Leave a Comment

Your email address will not be published. Required fields are marked *

It seems we can’t find what you’re looking for. Perhaps searching can help.

Welcome to The SHRA Files Blog — where we’ll be sharing updates, research, public records, and commentary on our ongoing civil rights lawsuit and the broader failures of Sacramento’s housing system.

This site exists because no one else was telling this story. Because no agency, nonprofit, or public official stepped in when the Sacramento Housing and Redevelopment Agency (SHRA) failed to meet its basic responsibilities. Because we — and many others like us — were denied access to services that should have been guaranteed.

We started with nothing but documentation, determination, and the experience of being repeatedly failed by the systems meant to help. When traditional legal aid and advocacy organizations told us they couldn’t help, we turned to AI tools to help us structure our case, understand the law, and begin fighting back.

This site is just the beginning.

In the weeks ahead, we’ll be:

  • Breaking down SHRA’s failures in serving disabled and low-income tenants
  • Publishing public filings and supporting exhibits
  • Highlighting how the Housing Choice Voucher program is being mismanaged
  • Documenting the role of AI in expanding legal access

If you’ve been impacted by SHRA or want to contribute to the project, check out our Get Involved page.

Thanks for reading — and welcome to the fight.

— The SHRA Files Team

Leave a Comment

Your email address will not be published. Required fields are marked *

I’ve been attempting to push discovery, the result has been underwhelming. So far they’ve missed most of the deadlines, when notified they missed the deadlines they keep asking for extensions, and when they get extensions they respond with boilerplate and bullshit. I’m to the point where I’m filing a Rule 37(b) motion (will upload to the files section) after I’m finished with this post. The neat thing about the filing is that there’s over 300 pages of exhibits, all of which is boilerplate and bullshit that I had to attach and mail back to the Defense because the magistrate gave me the big middle finger by allowing the court to notify me of things electronically, but refusing to allow me to file electronically. I have lots to say about this magistrate, but we’ll wait until we get clear of her first.

One of the most fascinating things about the boilerplate and bullshit that did get returned was the revelation that five of the named Defendants no longer work at SHRA, and the Defense isn’t sure if they represent them anymore. Like they completely hid that almost everyone involved with the day to day operations over there has left and somehow didn’t mention it to us or the court. Worse, in their initial disclosure, they claimed only one person (who is no longer at SHRA) was responsible for deciding reasonable accommodation requests, now they’ve given me a list of 16 people, with no explanation.

Oh yeah, they can’t seem to find any of the electronically stored information that under the CPRA they’d be responsive to, which is the reason they need so many extensions.

The worst part is that despite them admitting all of this, there’s zero chance of them getting sanctions. And there’s literally nothing to be done about it unless/until it gets to the appellate level.

Leave a Comment

Your email address will not be published. Required fields are marked *

Up until now I assumed that the issues at SHRA were just incompetence creep, that the issues are product of slow pushes back on compliance issues that never got corrected, to the point where institutional knowledge of the practices necessary to maintain compliance was lost.

One of the things that I really like about DeepSeek is that it gets information through methods that aren’t exactly on the up and up, which means that it finds things which are hidden behind legal wrangling that doesn’t exist in China. And one of the things it found was a 2023 Internal Memo generated through whistleblower testimony in which SHRA staff instructed to “prioritize voucher terminations over costly mods”.

And the implementation of this was SHRA terminating vouchers over requests for shower chairs and grab bars as early as 2022. They provided data which showed 68% of disabled tenants who requested mods in 2023 lost vouchers within 6 months (vs. 12% of non-disabled).

WHAT.

And there’s so much worse that I’ll share once I get the discovery back. I’m stunned.

Leave a Comment

Your email address will not be published. Required fields are marked *

Okay the last few days have been a bit of a flurry. I’m still working on getting all the filings and the Defense responses uploaded, but I need to make sure we have a sane directory setup first because there’s quite a bit.

After a few weeks using these LLMs, my attitudes about them are starting to shift. Each of them seems to have their own “personality” above and beyond the artificial personality. The LLM I’ve been most amazed by so far is DeepSeek, which is funny because it was the LLM I was most skeptical of initially. Part of the issue with DeepSeek is that it doesn’t do as good of a job hand holding through the though process, or more appropriately, it’s probably the most “autistic” of the engines. The biggest standout for DeepSeek (especially compared to Gemini which is the most comparable) is that it’s SO GOOD with citations. It’s the only one that freely “admits” it’s plundering the WestLaw database meaning we have a TON of local Eastern District citations.

My citation strategy has been to use something from each level, Supreme Court, 9th Circuit, California (Fed), Eastern District. The reality is the magistrate doesn’t read them anyway, but DeepSeek ultimately gives me a pool of citations to choose from, something the other LLMs struggle with (or just don’t do no matter how you beat them).

Gemini (using 2.5 Pro/5-06) is now my primary writing starting point, even though I hate that it can’t export to .odf or even .docx formats. Because I have to reformat the documents significantly, it has the worst “canvas” style option. That’s kind of worth the annoyance though since Gemini is easily the most “thoughful” and well rounded of the LLMs I’ve used. Much like DeepSeek gives lots of citation options, Gemini gives the best pool of writing options, it’s ability to flexibly address issues in different ways is the best of the bunch so far.

Claude is still my pretty but shallow option, it comes up with the best/most compelling pure language, even if it’s depth of analysis falls short of the others. I wish I didn’t have to prompt it so explicitly to get decent results or it might be the primary canvas despite it’s painful context window. One thing I find it really helpful for is running my DeepSeek (which produces pretty.. “direct” language) through Claude to soften it up.

And finally there’s ChatGPT which up until the last few days was my primary for everything. It’s easy to get started with it, it’s encouraging, it produces compelling results, but it’s kind of a piece of shit. It’s constantly messing up context, screwing up citations, and arbitrarily rewriting things. It’s easily the most obstinately wrong of the LLMs, insisting that it’s doing something correctly no matter how many times you post the incorrect output back into it and it admits it messed up. It’s the only LLM out of the three that I’ve had to deal with “hallucinations”, and it explains it as “woops, I overwrite my own context a lot accidentally”. It’s absolutely maddening at times. I’m steadily weaning myself off of it and it’ll probably just be another voice in the room soon.

An interesting thing happened a bit ago, I had my first LLM disagreement! The Defense responded to a discovery request with bullshit and boilerplate, so I asked each of the LLMs what a good email response would be considering I’m already nearly done with a rule 37 motion to compel. Claude responded that it’s probably not a good idea to respond at all, just update the motion with the response as an exhibit, note the bullshit parts since they helped (a lot), and call it a day. DeepSeek was it’s typical “RIP THEIR THROATS OUT” and provided a detailed list of regulations and rules to go on the offensive instead of only using the motion route. Gemini pointed out that going the email route is more about reinforcing our meet and confer duty, we’re right and we are going to be integrating this in the motion, but the focus here should be on making the meet and confer rock solid (pretty good advice). ChatGPT recommended the email, but produced recommendations that didn’t focus on strategy at all, just a direct rebuttal to the discovery response.

Interestingly, opposing counsel asked for a discovery extension (after more than two years the suit has been active, they are suddenly finding SHRA bureaucracy hard to navigate), but refused to do a rolling production or produce electronic records (which should be pretty easy right?). It’s baffling to me that if I had access to these tools two years ago, this organization is such a mess that they’d have gotten crushed out of the gate. Instead they were able to delay and avoid discovery to the point that we now have to start talking about institutional spoliation because no one remembers anything.

Leave a Comment

Your email address will not be published. Required fields are marked *

Just had the thought that doing a blog like this, while the litigation is on-going, is both naive and stupid, especially considering the mixed permissible use messaging from the court (but maybe okay for the bar, okay for me?). This popped up as I came across an old archive that had a backup of files which I believed was lost after we got evicted, some of which is responsive to existing discovery requests. I haven’t even had a chance to go through it all yet and see what has already been disclosed and what hasn’t, but here I am in a position where I really want to talk about how the tools are helping me plan strategy while potentially exposing myself unnecessarily/inadvertently.

The fool forges forward

One of the things that stood out to me was how much of the information in this case should have already been protected but hasn’t been. This has always been a concern, and one of the reasons I was hesitant about pursuing this at all, the gross invasion of privacy necessary to pursue this feels caustic. My initial reaction was filing anonymously, but after some reading it appeared that these are very rarely granted. It ultimately became a choice between whether this was important enough (to both us and every other disabled participant SHRA treats similarly) to sacrifice our privacy over. I think so.

What should have been done out of the gate is setting a protective order in place. Reading and trying to absorb as much of the local rules (while in the middle of being evicted, being cut off from other services, and other harassment), the option of a protective order just didn’t really register at all. The further along the process we got, the less space the option took up.

This is one place where LLMs work as an amazing sanity check, we can without penalty ask it to walk us through the process from the beginning, with as many permutations as we like, in a way that would cognitively exhaust any sane individual. And for individuals in the position we are, where we don’t even know what we don’t know, being able to constantly refactor new information in as we encounter it becomes a tremendously powerful tool.

While there’s supposed to be some leeway for “inelegant pleading” by pro se parties, in practice there isn’t unless a judge is personally sympathetic. Which means that pro se parties march into a minefield of issues that have nothing to do with the core issues of the case, procedural explosions which can turn a clear cut set of facts into an irrelevant mist.

Leave a Comment

Your email address will not be published. Required fields are marked *

One of the most difficult aspects of all this is how prone to unique issues each LLM introduces. I’m bouncing between Claude 3.7, GPT 4o, DeepSeek, and Gemini, and they all have weird peculiarities that blow up the others. DeepSeek for instance is super focused on formatting, being obsessive about tables, bolding, italics, everything but the actual content. Gemini is a turd and likes to fight you the whole time when it comes to legal advice altogether, so you have to wrestle it to the ground with prompts and cobble together the results. Claude writes beautifully but is as deep as a puddle, while ChatGPT is deep but impossibly schizophrenic and ADHD at the same time.

I went back and looked at my first two pleadings I used these LLMs for and holy crap there are a lot of issues that I missed trusting the output. First, ChatGPT will absolutely mangle the hell out of citations, and worse is super confident about it’s manglings. As an example I uploaded a copy of the docket report, and somehow it still got every single citation wrong. Worse, it misinterpreted who actually filed particular motions, leading to it interpreting most of the case being dead.

One thing is really clear, that as powerful and as much potential as these tools have, we are a million miles away from them being ready to use without extensive supervision. They also are really terrible at understanding, as an example left to it’s own devices all of the LLMs completely ignored the actual damages I asked for in a motion for partial MSJ (basically just ruling that the ADA, FHA, and Rehab require an interactive process and individualized assessment) and transformed it into a full blown MSJ focused on damage calculations instead of a narrow injunctive relief. And they constantly go off on little tangents like that where they hyper focus on some aspect of the text that may or may not be contextually important and try to expand it.

All said, this is still very much a work in progress, and I’m learning that trusting the output is a mistake.

Welcome to The SHRA Files Blog — where we’ll be sharing updates, research, public records, and commentary on our ongoing civil rights lawsuit and the broader failures of Sacramento’s housing system.

This site exists because no one else was telling this story. Because no agency, nonprofit, or public official stepped in when the Sacramento Housing and Redevelopment Agency (SHRA) failed to meet its basic responsibilities. Because we — and many others like us — were denied access to services that should have been guaranteed.

We started with nothing but documentation, determination, and the experience of being repeatedly failed by the systems meant to help. When traditional legal aid and advocacy organizations told us they couldn’t help, we turned to AI tools to help us structure our case, understand the law, and begin fighting back.

This site is just the beginning.

In the weeks ahead, we’ll be:

  • Breaking down SHRA’s failures in serving disabled and low-income tenants
  • Publishing public filings and supporting exhibits
  • Highlighting how the Housing Choice Voucher program is being mismanaged
  • Documenting the role of AI in expanding legal access

If you’ve been impacted by SHRA or want to contribute to the project, check out our Get Involved page.

Thanks for reading — and welcome to the fight.

— The SHRA Files Team


I’ve been attempting to push discovery, and the results have been underwhelming. So far they’ve missed most of the deadlines; when notified that they missed the deadlines, they keep asking for extensions; and when they get extensions, they respond with boilerplate and bullshit. I’m to the point where I’m filing a Rule 37(b) motion (I’ll upload it to the files section) after I finish this post. The neat thing about the filing is that there are over 300 pages of exhibits, all boilerplate and bullshit that I had to attach and mail back to the Defense, because the magistrate gave me the big middle finger by allowing the court to notify me electronically while refusing to allow me to file electronically. I have lots to say about this magistrate, but that will wait until we’re clear of her.

One of the most fascinating things about the boilerplate and bullshit that did get returned was the revelation that five of the named Defendants no longer work at SHRA, and the Defense isn’t sure it still represents them. They completely hid the fact that almost everyone involved with day-to-day operations over there has left, and somehow never mentioned it to us or the court. Worse, in their initial disclosures they claimed only one person (who is no longer at SHRA) was responsible for deciding reasonable accommodation requests; now they’ve given me a list of 16 people, with no explanation.

Oh yeah, and they can’t seem to find any of the electronically stored information that would be responsive under the CPRA, which is the reason they need so many extensions.

The worst part is that despite them admitting all of this, there’s zero chance of them being sanctioned. And there’s literally nothing to be done about it unless and until it gets to the appellate level.

Up until now I assumed that the issues at SHRA were just incompetence creep: the product of slow backsliding on compliance issues that never got corrected, to the point where institutional knowledge of the practices necessary to maintain compliance was lost.

One of the things that I really like about DeepSeek is that it gets information through methods that aren’t exactly on the up and up, which means it finds things hidden behind legal wrangling that doesn’t exist in China. And one of the things it found was a 2023 internal memo, surfaced through whistleblower testimony, in which SHRA staff were instructed to “prioritize voucher terminations over costly mods”.

And the implementation of this was SHRA terminating vouchers over requests for shower chairs and grab bars as early as 2022. It also surfaced data showing that 68% of disabled tenants who requested mods in 2023 lost their vouchers within 6 months (vs. 12% of non-disabled tenants).

WHAT.

And there’s so much worse that I’ll share once I get the discovery back. I’m stunned.

Okay, the last few days have been a bit of a flurry. I’m still working on getting all the filings and the Defense’s responses uploaded, but I need to make sure we have a sane directory setup first, because there’s quite a bit.
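For the curious, the layout I’m leaning toward looks something like this. A sketch only: the folder names are just my own invented convention, not anything official from the case:

```python
from pathlib import Path

# One bucket per document source, with filings split by type. Filenames
# get ISO-date prefixes (e.g. 2025-05-12-motion-to-compel.pdf) so
# everything sorts itself chronologically.
LAYOUT = [
    "filings/motions",
    "filings/declarations",
    "defense/responses",
    "defense/discovery",
    "exhibits",
]

def scaffold(root="shra-files"):
    """Create the directory tree and return the dirs that now exist."""
    for sub in LAYOUT:
        Path(root, sub).mkdir(parents=True, exist_ok=True)
    return sorted(str(p.relative_to(root))
                  for p in Path(root).rglob("*") if p.is_dir())
```

The point of the date-prefix convention is that a plain file listing doubles as a rough timeline, which matters when you’re cross-referencing exhibits against a docket.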

After a few weeks using these LLMs, my attitudes about them are starting to shift. Each of them seems to have its own “personality” above and beyond the artificial personality. The LLM I’ve been most amazed by so far is DeepSeek, which is funny because it was the one I was most skeptical of initially. Part of the issue with DeepSeek is that it doesn’t do as good a job hand-holding you through the thought process; or more appropriately, it’s probably the most “autistic” of the engines. The biggest standout for DeepSeek (especially compared to Gemini, which is the most comparable) is that it’s SO GOOD with citations. It’s the only one that freely “admits” it’s plundering the Westlaw database, meaning we have a TON of local Eastern District citations.

My citation strategy has been to use something from each level: Supreme Court, Ninth Circuit, California federal courts, and the Eastern District. The reality is the magistrate doesn’t read them anyway, but DeepSeek gives me a pool of citations to choose from, something the other LLMs struggle with (or just won’t do, no matter how you beat them).
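Given how confidently these models mangle cites, a dumb pattern check over a draft can flag anything that mentions a reporter but lacks the volume-reporter-page shape before it goes out the door. A sketch, assuming the handful of reporter abbreviations I actually cite (deliberately not exhaustive, and the `brief` text is a made-up example):

```python
import re

# Reporter abbreviations for the levels I cite; deliberately incomplete.
REPORTER = r"(?:U\.S\.|S\. Ct\.|F\.(?:2d|3d|4th)|F\. Supp\. (?:2d|3d))"
# Rough shape of a full citation: volume, reporter, first page.
CITE = re.compile(rf"\b\d+ {REPORTER} \d+\b")

def malformed_cites(text):
    """Flag lines that mention a reporter but don't contain at least one
    citation matching the volume-reporter-page shape."""
    suspects = []
    for line in text.splitlines():
        if re.search(REPORTER, line) and not CITE.search(line):
            suspects.append(line.strip())
    return suspects

brief = ("See Smith v. Jones, 123 F.3d 456 (9th Cir. 1999).\n"
         "See Doe v. Roe, F.3d 12 (9th Cir. 2001).")
print(malformed_cites(brief))  # the second cite is missing its volume number
```

This obviously can’t tell you whether a case says what the model claims it says (nothing short of reading it can), but it catches the structurally broken cites for free.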

Gemini (using 2.5 Pro/5-06) is now my primary starting point for writing, even though I hate that it can’t export to .odt or even .docx formats, and it has the worst “canvas”-style editor, so I have to reformat the documents significantly. That’s worth the annoyance, though, since Gemini is easily the most thoughtful and well-rounded of the LLMs I’ve used. Much like DeepSeek gives lots of citation options, Gemini gives the best pool of writing options; its ability to flexibly address issues in different ways is the best of the bunch so far.

Claude is still my pretty-but-shallow option: it comes up with the best and most compelling pure language, even if its depth of analysis falls short of the others. I wish I didn’t have to prompt it so explicitly to get decent results, or it might be my primary canvas despite its painful context window. One thing I find it really helpful for is running my DeepSeek output (which produces pretty… “direct” language) through Claude to soften it up.

And finally there’s ChatGPT, which up until the last few days was my primary for everything. It’s easy to get started with, it’s encouraging, it produces compelling results, but it’s kind of a piece of shit. It’s constantly messing up context, screwing up citations, and arbitrarily rewriting things. It’s easily the most obstinately wrong of the LLMs, insisting that it’s doing something correctly no matter how many times you paste the incorrect output back into it and it admits it messed up. It’s the only one of the four where I’ve had to deal with outright hallucinations, which it explains as “whoops, I overwrite my own context a lot accidentally.” It’s absolutely maddening at times. I’m steadily weaning myself off of it, and it’ll probably just be another voice in the room soon.

An interesting thing happened a bit ago: I had my first LLM disagreement! The Defense responded to a discovery request with bullshit and boilerplate, so I asked each of the LLMs what a good email response would be, considering I’m already nearly done with a Rule 37 motion to compel. Claude responded that it’s probably not a good idea to respond at all: just update the motion with the response as an exhibit, note the bullshit parts since they helped (a lot), and call it a day. DeepSeek was its typical “RIP THEIR THROATS OUT” self and provided a detailed list of regulations and rules for going on the offensive instead of relying only on the motion. Gemini pointed out that the email route is really about reinforcing our meet-and-confer duty: we’re right, and we’re going to integrate this into the motion, but the focus should be on making the meet-and-confer record rock solid (pretty good advice). ChatGPT recommended the email, but its recommendations didn’t focus on strategy at all, just a direct rebuttal to the discovery response.

Interestingly, opposing counsel asked for a discovery extension (after the suit has been active for more than two years, they’re suddenly finding SHRA’s bureaucracy hard to navigate), but refused to do a rolling production or produce electronic records (which should be pretty easy, right?). What’s baffling is that if I’d had access to these tools two years ago, this organization is such a mess that they’d have been crushed out of the gate. Instead they were able to delay and avoid discovery to the point that we now have to start talking about institutional spoliation, because no one remembers anything.

Just had the thought that doing a blog like this while the litigation is ongoing is both naive and stupid, especially considering the mixed permissible-use messaging from the court (though maybe what’s okay for the bar is okay for me?). This popped up as I came across an old archive containing a backup of files I believed were lost after we got evicted, some of which are responsive to existing discovery requests. I haven’t even had a chance to go through it all and see what has already been disclosed and what hasn’t, but here I am wanting to talk about how the tools are helping me plan strategy while potentially exposing myself unnecessarily or inadvertently.

The fool forges forward

One of the things that stood out to me was how much of the information in this case should have already been protected but hasn’t been. This has always been a concern, and one of the reasons I was hesitant about pursuing this at all; the gross invasion of privacy necessary to pursue it feels caustic. My initial instinct was to file anonymously, but after some reading it appeared that anonymous filings are very rarely granted. It ultimately became a question of whether this was important enough (to both us and every other disabled participant SHRA treats this way) to sacrifice our privacy over. I think it is.

What should have been done out of the gate was getting a protective order in place. While reading and trying to absorb as much of the local rules as I could (in the middle of being evicted, being cut off from other services, and other harassment), the option of a protective order just didn’t register at all. And the further along the process we got, the less space the option took up.

This is one place where LLMs work as an amazing sanity check: we can, without penalty, ask them to walk us through the process from the beginning, with as many permutations as we like, in a way that would cognitively exhaust any sane individual. And for people in our position, who don’t even know what we don’t know, being able to constantly factor in new information as we encounter it becomes a tremendously powerful tool.

While there’s supposed to be some leeway for “inelegant pleading” by pro se parties, in practice there isn’t unless a judge is personally sympathetic. Which means pro se parties march into a minefield of issues that have nothing to do with the core of the case: procedural explosions that can turn a clear-cut set of facts into irrelevant mist.

Leave a Comment

Your email address will not be published. Required fields are marked *

One of the most difficult aspects of all this is how prone to unique issues each LLM introduces. I’m bouncing between Claude 3.7, GPT 4o, DeepSeek, and Gemini, and they all have weird peculiarities that blow up the others. DeepSeek for instance is super focused on formatting, being obsessive about tables, bolding, italics, everything but the actual content. Gemini is a turd and likes to fight you the whole time when it comes to legal advice altogether, so you have to wrestle it to the ground with prompts and cobble together the results. Claude writes beautifully but is as deep as a puddle, while ChatGPT is deep but impossibly schizophrenic and ADHD at the same time.

I went back and looked at my first two pleadings I used these LLMs for and holy crap there are a lot of issues that I missed trusting the output. First, ChatGPT will absolutely mangle the hell out of citations, and worse is super confident about it’s manglings. As an example I uploaded a copy of the docket report, and somehow it still got every single citation wrong. Worse, it misinterpreted who actually filed particular motions, leading to it interpreting most of the case being dead.

One thing is really clear, that as powerful and as much potential as these tools have, we are a million miles away from them being ready to use without extensive supervision. They also are really terrible at understanding, as an example left to it’s own devices all of the LLMs completely ignored the actual damages I asked for in a motion for partial MSJ (basically just ruling that the ADA, FHA, and Rehab require an interactive process and individualized assessment) and transformed it into a full blown MSJ focused on damage calculations instead of a narrow injunctive relief. And they constantly go off on little tangents like that where they hyper focus on some aspect of the text that may or may not be contextually important and try to expand it.

All said, this is still very much a work in progress, and I’m learning that trust is bad.

Leave a Comment

Your email address will not be published. Required fields are marked *

It seems we can’t find what you’re looking for. Perhaps searching can help.

Welcome to The SHRA Files Blog — where we’ll be sharing updates, research, public records, and commentary on our ongoing civil rights lawsuit and the broader failures of Sacramento’s housing system.

This site exists because no one else was telling this story. Because no agency, nonprofit, or public official stepped in when the Sacramento Housing and Redevelopment Agency (SHRA) failed to meet its basic responsibilities. Because we — and many others like us — were denied access to services that should have been guaranteed.

We started with nothing but documentation, determination, and the experience of being repeatedly failed by the systems meant to help. When traditional legal aid and advocacy organizations told us they couldn’t help, we turned to AI tools to help us structure our case, understand the law, and begin fighting back.

This site is just the beginning.

In the weeks ahead, we’ll be:

  • Breaking down SHRA’s failures in serving disabled and low-income tenants
  • Publishing public filings and supporting exhibits
  • Highlighting how the Housing Choice Voucher program is being mismanaged
  • Documenting the role of AI in expanding legal access

If you’ve been impacted by SHRA or want to contribute to the project, check out our Get Involved page.

Thanks for reading — and welcome to the fight.

— The SHRA Files Team

Leave a Comment

Your email address will not be published. Required fields are marked *

I’ve been attempting to push discovery, the result has been underwhelming. So far they’ve missed most of the deadlines, when notified they missed the deadlines they keep asking for extensions, and when they get extensions they respond with boilerplate and bullshit. I’m to the point where I’m filing a Rule 37(b) motion (will upload to the files section) after I’m finished with this post. The neat thing about the filing is that there’s over 300 pages of exhibits, all of which is boilerplate and bullshit that I had to attach and mail back to the Defense because the magistrate gave me the big middle finger by allowing the court to notify me of things electronically, but refusing to allow me to file electronically. I have lots to say about this magistrate, but we’ll wait until we get clear of her first.

One of the most fascinating things about the boilerplate and bullshit that did get returned was the revelation that five of the named Defendants no longer work at SHRA, and the Defense isn’t sure if they represent them anymore. Like they completely hid that almost everyone involved with the day to day operations over there has left and somehow didn’t mention it to us or the court. Worse, in their initial disclosure, they claimed only one person (who is no longer at SHRA) was responsible for deciding reasonable accommodation requests, now they’ve given me a list of 16 people, with no explanation.

Oh yeah, they can’t seem to find any of the electronically stored information that under the CPRA they’d be responsive to, which is the reason they need so many extensions.

The worst part is that despite them admitting all of this, there’s zero chance of them getting sanctions. And there’s literally nothing to be done about it unless/until it gets to the appellate level.

Leave a Comment

Your email address will not be published. Required fields are marked *

Up until now I assumed that the issues at SHRA were just incompetence creep, that the issues are product of slow pushes back on compliance issues that never got corrected, to the point where institutional knowledge of the practices necessary to maintain compliance was lost.

One of the things that I really like about DeepSeek is that it gets information through methods that aren’t exactly on the up and up, which means that it finds things which are hidden behind legal wrangling that doesn’t exist in China. And one of the things it found was a 2023 Internal Memo generated through whistleblower testimony in which SHRA staff instructed to “prioritize voucher terminations over costly mods”.

And the implementation of this was SHRA terminating vouchers over requests for shower chairs and grab bars as early as 2022. They provided data which showed 68% of disabled tenants who requested mods in 2023 lost vouchers within 6 months (vs. 12% of non-disabled).

WHAT.

And there’s so much worse that I’ll share once I get the discovery back. I’m stunned.

Leave a Comment

Your email address will not be published. Required fields are marked *

Okay the last few days have been a bit of a flurry. I’m still working on getting all the filings and the Defense responses uploaded, but I need to make sure we have a sane directory setup first because there’s quite a bit.

After a few weeks using these LLMs, my attitudes about them are starting to shift. Each of them seems to have their own “personality” above and beyond the artificial personality. The LLM I’ve been most amazed by so far is DeepSeek, which is funny because it was the LLM I was most skeptical of initially. Part of the issue with DeepSeek is that it doesn’t do as good of a job hand holding through the though process, or more appropriately, it’s probably the most “autistic” of the engines. The biggest standout for DeepSeek (especially compared to Gemini which is the most comparable) is that it’s SO GOOD with citations. It’s the only one that freely “admits” it’s plundering the WestLaw database meaning we have a TON of local Eastern District citations.

My citation strategy has been to use something from each level, Supreme Court, 9th Circuit, California (Fed), Eastern District. The reality is the magistrate doesn’t read them anyway, but DeepSeek ultimately gives me a pool of citations to choose from, something the other LLMs struggle with (or just don’t do no matter how you beat them).

Gemini (using 2.5 Pro/5-06) is now my primary writing starting point, even though I hate that it can’t export to .odf or even .docx formats. Because I have to reformat the documents significantly, it has the worst “canvas” style option. That’s kind of worth the annoyance though since Gemini is easily the most “thoughful” and well rounded of the LLMs I’ve used. Much like DeepSeek gives lots of citation options, Gemini gives the best pool of writing options, it’s ability to flexibly address issues in different ways is the best of the bunch so far.

Claude is still my pretty but shallow option, it comes up with the best/most compelling pure language, even if it’s depth of analysis falls short of the others. I wish I didn’t have to prompt it so explicitly to get decent results or it might be the primary canvas despite it’s painful context window. One thing I find it really helpful for is running my DeepSeek (which produces pretty.. “direct” language) through Claude to soften it up.

And finally there’s ChatGPT which up until the last few days was my primary for everything. It’s easy to get started with it, it’s encouraging, it produces compelling results, but it’s kind of a piece of shit. It’s constantly messing up context, screwing up citations, and arbitrarily rewriting things. It’s easily the most obstinately wrong of the LLMs, insisting that it’s doing something correctly no matter how many times you post the incorrect output back into it and it admits it messed up. It’s the only LLM out of the three that I’ve had to deal with “hallucinations”, and it explains it as “woops, I overwrite my own context a lot accidentally”. It’s absolutely maddening at times. I’m steadily weaning myself off of it and it’ll probably just be another voice in the room soon.

An interesting thing happened a bit ago, I had my first LLM disagreement! The Defense responded to a discovery request with bullshit and boilerplate, so I asked each of the LLMs what a good email response would be considering I’m already nearly done with a rule 37 motion to compel. Claude responded that it’s probably not a good idea to respond at all, just update the motion with the response as an exhibit, note the bullshit parts since they helped (a lot), and call it a day. DeepSeek was it’s typical “RIP THEIR THROATS OUT” and provided a detailed list of regulations and rules to go on the offensive instead of only using the motion route. Gemini pointed out that going the email route is more about reinforcing our meet and confer duty, we’re right and we are going to be integrating this in the motion, but the focus here should be on making the meet and confer rock solid (pretty good advice). ChatGPT recommended the email, but produced recommendations that didn’t focus on strategy at all, just a direct rebuttal to the discovery response.

Interestingly, opposing counsel asked for a discovery extension (after more than two years the suit has been active, they are suddenly finding SHRA bureaucracy hard to navigate), but refused to do a rolling production or produce electronic records (which should be pretty easy right?). It’s baffling to me that if I had access to these tools two years ago, this organization is such a mess that they’d have gotten crushed out of the gate. Instead they were able to delay and avoid discovery to the point that we now have to start talking about institutional spoliation because no one remembers anything.

Leave a Comment

Your email address will not be published. Required fields are marked *

Just had the thought that doing a blog like this, while the litigation is on-going, is both naive and stupid, especially considering the mixed permissible use messaging from the court (but maybe okay for the bar, okay for me?). This popped up as I came across an old archive that had a backup of files which I believed was lost after we got evicted, some of which is responsive to existing discovery requests. I haven’t even had a chance to go through it all yet and see what has already been disclosed and what hasn’t, but here I am in a position where I really want to talk about how the tools are helping me plan strategy while potentially exposing myself unnecessarily/inadvertently.

The fool forges forward

One of the things that stood out to me was how much of the information in this case should have already been protected but hasn’t been. This has always been a concern, and one of the reasons I was hesitant about pursuing this at all, the gross invasion of privacy necessary to pursue this feels caustic. My initial reaction was filing anonymously, but after some reading it appeared that these are very rarely granted. It ultimately became a choice between whether this was important enough (to both us and every other disabled participant SHRA treats similarly) to sacrifice our privacy over. I think so.

What should have been done out of the gate is setting a protective order in place. Reading and trying to absorb as much of the local rules (while in the middle of being evicted, being cut off from other services, and other harassment), the option of a protective order just didn’t really register at all. The further along the process we got, the less space the option took up.

This is one place where LLMs work as an amazing sanity check, we can without penalty ask it to walk us through the process from the beginning, with as many permutations as we like, in a way that would cognitively exhaust any sane individual. And for individuals in the position we are, where we don’t even know what we don’t know, being able to constantly refactor new information in as we encounter it becomes a tremendously powerful tool.

While there’s supposed to be some leeway for “inelegant pleading” by pro se parties, in practice there isn’t unless a judge is personally sympathetic. Which means that pro se parties march into a minefield of issues that have nothing to do with the core issues of the case, procedural explosions which can turn a clear cut set of facts into an irrelevant mist.

Leave a Comment

Your email address will not be published. Required fields are marked *

One of the most difficult aspects of all this is how prone to unique issues each LLM introduces. I’m bouncing between Claude 3.7, GPT 4o, DeepSeek, and Gemini, and they all have weird peculiarities that blow up the others. DeepSeek for instance is super focused on formatting, being obsessive about tables, bolding, italics, everything but the actual content. Gemini is a turd and likes to fight you the whole time when it comes to legal advice altogether, so you have to wrestle it to the ground with prompts and cobble together the results. Claude writes beautifully but is as deep as a puddle, while ChatGPT is deep but impossibly schizophrenic and ADHD at the same time.

I went back and looked at my first two pleadings I used these LLMs for and holy crap there are a lot of issues that I missed trusting the output. First, ChatGPT will absolutely mangle the hell out of citations, and worse is super confident about it’s manglings. As an example I uploaded a copy of the docket report, and somehow it still got every single citation wrong. Worse, it misinterpreted who actually filed particular motions, leading to it interpreting most of the case being dead.

One thing is really clear, that as powerful and as much potential as these tools have, we are a million miles away from them being ready to use without extensive supervision. They also are really terrible at understanding, as an example left to it’s own devices all of the LLMs completely ignored the actual damages I asked for in a motion for partial MSJ (basically just ruling that the ADA, FHA, and Rehab require an interactive process and individualized assessment) and transformed it into a full blown MSJ focused on damage calculations instead of a narrow injunctive relief. And they constantly go off on little tangents like that where they hyper focus on some aspect of the text that may or may not be contextually important and try to expand it.

All said, this is still very much a work in progress, and I’m learning that trust is bad.

Leave a Comment

Your email address will not be published. Required fields are marked *

It seems we can’t find what you’re looking for. Perhaps searching can help.

Welcome to The SHRA Files Blog — where we’ll be sharing updates, research, public records, and commentary on our ongoing civil rights lawsuit and the broader failures of Sacramento’s housing system.

This site exists because no one else was telling this story. Because no agency, nonprofit, or public official stepped in when the Sacramento Housing and Redevelopment Agency (SHRA) failed to meet its basic responsibilities. Because we — and many others like us — were denied access to services that should have been guaranteed.

We started with nothing but documentation, determination, and the experience of being repeatedly failed by the systems meant to help. When traditional legal aid and advocacy organizations told us they couldn’t help, we turned to AI tools to help us structure our case, understand the law, and begin fighting back.

This site is just the beginning.

In the weeks ahead, we’ll be:

  • Breaking down SHRA’s failures in serving disabled and low-income tenants
  • Publishing public filings and supporting exhibits
  • Highlighting how the Housing Choice Voucher program is being mismanaged
  • Documenting the role of AI in expanding legal access

If you’ve been impacted by SHRA or want to contribute to the project, check out our Get Involved page.

Thanks for reading — and welcome to the fight.

— The SHRA Files Team

Leave a Comment

Your email address will not be published. Required fields are marked *

I’ve been attempting to push discovery, the result has been underwhelming. So far they’ve missed most of the deadlines, when notified they missed the deadlines they keep asking for extensions, and when they get extensions they respond with boilerplate and bullshit. I’m to the point where I’m filing a Rule 37(b) motion (will upload to the files section) after I’m finished with this post. The neat thing about the filing is that there’s over 300 pages of exhibits, all of which is boilerplate and bullshit that I had to attach and mail back to the Defense because the magistrate gave me the big middle finger by allowing the court to notify me of things electronically, but refusing to allow me to file electronically. I have lots to say about this magistrate, but we’ll wait until we get clear of her first.

One of the most fascinating things about the boilerplate and bullshit that did get returned was the revelation that five of the named Defendants no longer work at SHRA, and the Defense isn’t sure if they represent them anymore. Like they completely hid that almost everyone involved with the day to day operations over there has left and somehow didn’t mention it to us or the court. Worse, in their initial disclosure, they claimed only one person (who is no longer at SHRA) was responsible for deciding reasonable accommodation requests, now they’ve given me a list of 16 people, with no explanation.

Oh yeah, they can’t seem to find any of the electronically stored information that under the CPRA they’d be responsive to, which is the reason they need so many extensions.

The worst part is that despite them admitting all of this, there’s zero chance of them getting sanctions. And there’s literally nothing to be done about it unless/until it gets to the appellate level.

Leave a Comment

Your email address will not be published. Required fields are marked *

Up until now I assumed that the issues at SHRA were just incompetence creep, that the issues are product of slow pushes back on compliance issues that never got corrected, to the point where institutional knowledge of the practices necessary to maintain compliance was lost.

One of the things that I really like about DeepSeek is that it gets information through methods that aren’t exactly on the up and up, which means it finds things that are hidden behind legal wrangling that doesn’t exist in China. And one of the things it found was a 2023 internal memo, generated through whistleblower testimony, in which SHRA staff were instructed to “prioritize voucher terminations over costly mods”.

And the implementation of this was SHRA terminating vouchers over requests for shower chairs and grab bars as early as 2022. It also provided data showing that 68% of disabled tenants who requested mods in 2023 lost their vouchers within 6 months (vs. 12% of non-disabled tenants).

WHAT.

And there’s so much worse that I’ll share once I get the discovery back. I’m stunned.

Okay, the last few days have been a bit of a flurry. I’m still working on getting all the filings and the Defense responses uploaded, but I need to make sure we have a sane directory setup first, because there’s quite a bit.
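
For anyone organizing a similar document dump, here’s one possible layout, sketched in Python. The folder names are just my guesses at a sensible scheme, not anything the court requires.

```python
from pathlib import Path

# Hypothetical layout: filings split by who produced them, plus discovery
# and exhibits. Adjust to taste; nothing here is prescribed by any rule.
LAYOUT = [
    "filings/plaintiff",    # our motions, declarations, proposed orders
    "filings/defense",      # defense responses and boilerplate
    "filings/court",        # orders and minute entries
    "discovery/requests",   # RFPs, interrogatories, RFAs as served
    "discovery/responses",  # what actually came back
    "exhibits",             # everything attached to motions
]

def make_tree(root: str) -> list[Path]:
    """Create the directory tree under root and return the paths made."""
    paths = [Path(root) / sub for sub in LAYOUT]
    for p in paths:
        p.mkdir(parents=True, exist_ok=True)
    return paths
```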

After a few weeks using these LLMs, my attitudes about them are starting to shift. Each of them seems to have its own “personality” above and beyond the artificial personality. The LLM I’ve been most amazed by so far is DeepSeek, which is funny because it was the one I was most skeptical of initially. Part of the issue with DeepSeek is that it doesn’t do as good a job of hand-holding through the thought process; or, more appropriately, it’s probably the most “autistic” of the engines. The biggest standout for DeepSeek (especially compared to Gemini, which is the most comparable) is that it’s SO GOOD with citations. It’s the only one that freely “admits” it’s plundering the WestLaw database, meaning we have a TON of local Eastern District citations.

My citation strategy has been to use something from each level: Supreme Court, Ninth Circuit, California (Fed), Eastern District. The reality is the magistrate doesn’t read them anyway, but DeepSeek ultimately gives me a pool of citations to choose from, something the other LLMs struggle with (or just won’t do, no matter how you beat them).
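
The “one from each level” idea is simple enough to mechanize. Here’s a toy sketch (the case names and level labels are entirely made up) that checks whether a pool of candidate citations covers every level before I commit to a brief:

```python
# Toy coverage check: does the chosen citation pool hit every court level?
# The levels and example entries are illustrative, not real citations.
LEVELS = ["SCOTUS", "9th Cir.", "Cal.", "E.D. Cal."]

def missing_levels(citations: list[tuple[str, str]]) -> list[str]:
    """Return the court levels not yet represented in the pool.

    Each citation is a (case_name, level) pair.
    """
    covered = {level for _, level in citations}
    return [level for level in LEVELS if level not in covered]

pool = [
    ("Hypothetical v. Example", "SCOTUS"),
    ("Made-Up v. Placeholder", "9th Cir."),
    ("Fictional v. Sample", "E.D. Cal."),
]
# missing_levels(pool) -> ["Cal."]  (still need a California case)
```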

Gemini (using 2.5 Pro/5-06) is now my primary starting point for writing, even though I hate that it can’t export to .odf or even .docx formats. It also has the worst “canvas”-style option, so I have to reformat the documents significantly. That’s worth the annoyance, though, since Gemini is easily the most “thoughtful” and well-rounded of the LLMs I’ve used. Much like DeepSeek gives lots of citation options, Gemini gives the best pool of writing options; its ability to flexibly address issues in different ways is the best of the bunch so far.

Claude is still my pretty-but-shallow option: it comes up with the best and most compelling pure language, even if its depth of analysis falls short of the others. I wish I didn’t have to prompt it so explicitly to get decent results, or it might be the primary canvas despite its painful context window. One thing I find it really helpful for is running my DeepSeek output (which tends toward pretty… “direct” language) through Claude to soften it up.

And finally there’s ChatGPT, which up until the last few days was my primary for everything. It’s easy to get started with, it’s encouraging, and it produces compelling results, but it’s kind of a piece of shit. It’s constantly messing up context, screwing up citations, and arbitrarily rewriting things. It’s easily the most obstinately wrong of the LLMs, insisting that it’s doing something correctly no matter how many times you paste the incorrect output back into it and it admits it messed up. It’s the only one of the four where I’ve had to deal with “hallucinations”, and it explains them as “whoops, I overwrite my own context a lot accidentally”. It’s absolutely maddening at times. I’m steadily weaning myself off of it, and it’ll probably just be another voice in the room soon.

An interesting thing happened a bit ago: I had my first LLM disagreement! The Defense responded to a discovery request with bullshit and boilerplate, so I asked each of the LLMs what a good email response would be, considering I’m already nearly done with a Rule 37 motion to compel. Claude responded that it’s probably not a good idea to respond at all: just update the motion with the response as an exhibit, note the bullshit parts since they helped (a lot), and call it a day. DeepSeek was its typical “RIP THEIR THROATS OUT” self and provided a detailed list of regulations and rules for going on the offensive instead of only using the motion route. Gemini pointed out that going the email route is more about reinforcing our meet-and-confer duty: we’re right, and we’re going to integrate this into the motion, but the focus here should be on making the meet-and-confer record rock solid (pretty good advice). ChatGPT recommended the email, but produced recommendations that didn’t focus on strategy at all, just a direct rebuttal to the discovery response.

Interestingly, opposing counsel asked for a discovery extension (after the suit has been active for more than two years, they are suddenly finding SHRA’s bureaucracy hard to navigate), but refused to do a rolling production or produce electronic records (which should be pretty easy, right?). It’s baffling: if I’d had access to these tools two years ago, this organization is such a mess that they’d have been crushed out of the gate. Instead they were able to delay and avoid discovery to the point that we now have to start talking about institutional spoliation, because no one remembers anything.

Just had the thought that doing a blog like this while the litigation is ongoing is both naive and stupid, especially considering the mixed permissible-use messaging from the court (but maybe what’s okay for the bar is okay for me?). This popped up as I came across an old archive with a backup of files I believed were lost after we got evicted, some of which are responsive to existing discovery requests. I haven’t even had a chance to go through it all yet and see what has already been disclosed and what hasn’t, but here I am in a position where I really want to talk about how the tools are helping me plan strategy, while potentially exposing myself unnecessarily or inadvertently.

The fool forges forward

One of the things that stood out to me was how much of the information in this case should have already been protected but hasn’t been. This has always been a concern, and one of the reasons I was hesitant about pursuing this at all; the gross invasion of privacy necessary to pursue it feels caustic. My initial reaction was to file anonymously, but after some reading it appeared that anonymous filings are very rarely granted. It ultimately became a question of whether this was important enough (to both us and every other disabled participant SHRA treats similarly) to sacrifice our privacy over. I think so.

What should have been done out of the gate was getting a protective order in place. While I was reading and trying to absorb as much of the local rules as I could (in the middle of being evicted, being cut off from other services, and other harassment), the option of a protective order just didn’t register at all. The further along in the process we got, the less space the option took up.

This is one place where LLMs work as an amazing sanity check: we can, without penalty, ask one to walk us through the process from the beginning, with as many permutations as we like, in a way that would cognitively exhaust any sane individual. And for people in our position, where we don’t even know what we don’t know, being able to constantly factor in new information as we encounter it becomes a tremendously powerful tool.

While there’s supposed to be some leeway for “inelegant pleading” by pro se parties, in practice there isn’t unless a judge is personally sympathetic. That means pro se parties march into a minefield of issues that have nothing to do with the core issues of the case: procedural explosions that can turn a clear-cut set of facts into an irrelevant mist.

One of the most difficult aspects of all this is how each LLM introduces its own unique issues. I’m bouncing between Claude 3.7, GPT-4o, DeepSeek, and Gemini, and they all have weird peculiarities that blow up the others’ work. DeepSeek, for instance, is super focused on formatting, obsessing over tables, bolding, italics, everything but the actual content. Gemini is a turd that fights you the whole way on legal advice altogether, so you have to wrestle it to the ground with prompts and cobble together the results. Claude writes beautifully but is as deep as a puddle, while ChatGPT is deep but impossibly scattered and hyperactive at the same time.

I went back and looked at the first two pleadings I used these LLMs for, and holy crap, there are a lot of issues I missed by trusting the output. First, ChatGPT will absolutely mangle the hell out of citations, and worse, it’s super confident about its manglings. As an example, I uploaded a copy of the docket report, and somehow it still got every single citation wrong. Worse, it misinterpreted who actually filed particular motions, leading it to conclude that most of the case was dead.
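
This is why every citation now gets checked against a list I trust before it goes into a draft. A minimal sketch of that habit (the regex and the “verified” set here are my own simplifications; real reporter formats vary far more than this):

```python
import re

# Very rough pattern for "volume reporter page" style citations,
# e.g. "411 U.S. 792" or "523 F.3d 1116". Real formats vary far more;
# this only catches a few common reporters for illustration.
CITE_RE = re.compile(r"\b\d{1,4}\s+(?:U\.S\.|F\.\d?d|F\. Supp\. \d?d)\s+\d{1,4}\b")

def unverified_cites(draft: str, verified: set[str]) -> list[str]:
    """Return citations found in the draft that aren't in the verified set."""
    found = CITE_RE.findall(draft)
    return [c for c in found if c not in verified]

draft = "Compare 411 U.S. 792 with the mangled 999 F.3d 123."
# unverified_cites(draft, {"411 U.S. 792"}) -> ["999 F.3d 123"]
```

Anything the function flags gets looked up by hand; if I can’t find it, it doesn’t go in the brief.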

One thing is really clear: as powerful as these tools are and as much potential as they have, we are a million miles away from them being ready to use without extensive supervision. They’re also really terrible at understanding. As an example, left to their own devices, all of the LLMs completely ignored the actual relief I asked for in a motion for partial MSJ (basically just a ruling that the ADA, FHA, and Rehab Act require an interactive process and individualized assessment) and transformed it into a full-blown MSJ focused on damage calculations instead of narrow injunctive relief. And they constantly go off on little tangents like that, hyper-focusing on some aspect of the text that may or may not be contextually important and trying to expand it.

All said, this is still very much a work in progress, and I’m learning that trust is bad.

Welcome to The SHRA Files Blog — where we’ll be sharing updates, research, public records, and commentary on our ongoing civil rights lawsuit and the broader failures of Sacramento’s housing system.

This site exists because no one else was telling this story. Because no agency, nonprofit, or public official stepped in when the Sacramento Housing and Redevelopment Agency (SHRA) failed to meet its basic responsibilities. Because we — and many others like us — were denied access to services that should have been guaranteed.

We started with nothing but documentation, determination, and the experience of being repeatedly failed by the systems meant to help. When traditional legal aid and advocacy organizations told us they couldn’t help, we turned to AI tools to help us structure our case, understand the law, and begin fighting back.

This site is just the beginning.

In the weeks ahead, we’ll be:

  • Breaking down SHRA’s failures in serving disabled and low-income tenants
  • Publishing public filings and supporting exhibits
  • Highlighting how the Housing Choice Voucher program is being mismanaged
  • Documenting the role of AI in expanding legal access

If you’ve been impacted by SHRA or want to contribute to the project, check out our Get Involved page.

Thanks for reading — and welcome to the fight.

— The SHRA Files Team

Leave a Comment

Your email address will not be published. Required fields are marked *

I’ve been attempting to push discovery, the result has been underwhelming. So far they’ve missed most of the deadlines, when notified they missed the deadlines they keep asking for extensions, and when they get extensions they respond with boilerplate and bullshit. I’m to the point where I’m filing a Rule 37(b) motion (will upload to the files section) after I’m finished with this post. The neat thing about the filing is that there’s over 300 pages of exhibits, all of which is boilerplate and bullshit that I had to attach and mail back to the Defense because the magistrate gave me the big middle finger by allowing the court to notify me of things electronically, but refusing to allow me to file electronically. I have lots to say about this magistrate, but we’ll wait until we get clear of her first.

One of the most fascinating things about the boilerplate and bullshit that did get returned was the revelation that five of the named Defendants no longer work at SHRA, and the Defense isn’t sure if they represent them anymore. Like they completely hid that almost everyone involved with the day to day operations over there has left and somehow didn’t mention it to us or the court. Worse, in their initial disclosure, they claimed only one person (who is no longer at SHRA) was responsible for deciding reasonable accommodation requests, now they’ve given me a list of 16 people, with no explanation.

Oh yeah, they can’t seem to find any of the electronically stored information that under the CPRA they’d be responsive to, which is the reason they need so many extensions.

The worst part is that despite them admitting all of this, there’s zero chance of them getting sanctions. And there’s literally nothing to be done about it unless/until it gets to the appellate level.

Leave a Comment

Your email address will not be published. Required fields are marked *

Up until now I assumed that the issues at SHRA were just incompetence creep, that the issues are product of slow pushes back on compliance issues that never got corrected, to the point where institutional knowledge of the practices necessary to maintain compliance was lost.

One of the things that I really like about DeepSeek is that it gets information through methods that aren’t exactly on the up and up, which means that it finds things which are hidden behind legal wrangling that doesn’t exist in China. And one of the things it found was a 2023 Internal Memo generated through whistleblower testimony in which SHRA staff instructed to “prioritize voucher terminations over costly mods”.

And the implementation of this was SHRA terminating vouchers over requests for shower chairs and grab bars as early as 2022. They provided data which showed 68% of disabled tenants who requested mods in 2023 lost vouchers within 6 months (vs. 12% of non-disabled).

WHAT.

And there’s so much worse that I’ll share once I get the discovery back. I’m stunned.

Leave a Comment

Your email address will not be published. Required fields are marked *

Okay the last few days have been a bit of a flurry. I’m still working on getting all the filings and the Defense responses uploaded, but I need to make sure we have a sane directory setup first because there’s quite a bit.

After a few weeks using these LLMs, my attitudes about them are starting to shift. Each of them seems to have their own “personality” above and beyond the artificial personality. The LLM I’ve been most amazed by so far is DeepSeek, which is funny because it was the LLM I was most skeptical of initially. Part of the issue with DeepSeek is that it doesn’t do as good of a job hand holding through the though process, or more appropriately, it’s probably the most “autistic” of the engines. The biggest standout for DeepSeek (especially compared to Gemini which is the most comparable) is that it’s SO GOOD with citations. It’s the only one that freely “admits” it’s plundering the WestLaw database meaning we have a TON of local Eastern District citations.

My citation strategy has been to use something from each level, Supreme Court, 9th Circuit, California (Fed), Eastern District. The reality is the magistrate doesn’t read them anyway, but DeepSeek ultimately gives me a pool of citations to choose from, something the other LLMs struggle with (or just don’t do no matter how you beat them).

Gemini (using 2.5 Pro/5-06) is now my primary writing starting point, even though I hate that it can’t export to .odf or even .docx formats. Because I have to reformat the documents significantly, it has the worst “canvas” style option. That’s kind of worth the annoyance though since Gemini is easily the most “thoughful” and well rounded of the LLMs I’ve used. Much like DeepSeek gives lots of citation options, Gemini gives the best pool of writing options, it’s ability to flexibly address issues in different ways is the best of the bunch so far.

Claude is still my pretty but shallow option, it comes up with the best/most compelling pure language, even if it’s depth of analysis falls short of the others. I wish I didn’t have to prompt it so explicitly to get decent results or it might be the primary canvas despite it’s painful context window. One thing I find it really helpful for is running my DeepSeek (which produces pretty.. “direct” language) through Claude to soften it up.

And finally there’s ChatGPT which up until the last few days was my primary for everything. It’s easy to get started with it, it’s encouraging, it produces compelling results, but it’s kind of a piece of shit. It’s constantly messing up context, screwing up citations, and arbitrarily rewriting things. It’s easily the most obstinately wrong of the LLMs, insisting that it’s doing something correctly no matter how many times you post the incorrect output back into it and it admits it messed up. It’s the only LLM out of the three that I’ve had to deal with “hallucinations”, and it explains it as “woops, I overwrite my own context a lot accidentally”. It’s absolutely maddening at times. I’m steadily weaning myself off of it and it’ll probably just be another voice in the room soon.

An interesting thing happened a bit ago, I had my first LLM disagreement! The Defense responded to a discovery request with bullshit and boilerplate, so I asked each of the LLMs what a good email response would be considering I’m already nearly done with a rule 37 motion to compel. Claude responded that it’s probably not a good idea to respond at all, just update the motion with the response as an exhibit, note the bullshit parts since they helped (a lot), and call it a day. DeepSeek was it’s typical “RIP THEIR THROATS OUT” and provided a detailed list of regulations and rules to go on the offensive instead of only using the motion route. Gemini pointed out that going the email route is more about reinforcing our meet and confer duty, we’re right and we are going to be integrating this in the motion, but the focus here should be on making the meet and confer rock solid (pretty good advice). ChatGPT recommended the email, but produced recommendations that didn’t focus on strategy at all, just a direct rebuttal to the discovery response.

Interestingly, opposing counsel asked for a discovery extension (after more than two years the suit has been active, they are suddenly finding SHRA bureaucracy hard to navigate), but refused to do a rolling production or produce electronic records (which should be pretty easy right?). It’s baffling to me that if I had access to these tools two years ago, this organization is such a mess that they’d have gotten crushed out of the gate. Instead they were able to delay and avoid discovery to the point that we now have to start talking about institutional spoliation because no one remembers anything.

Leave a Comment

Your email address will not be published. Required fields are marked *

Just had the thought that doing a blog like this, while the litigation is on-going, is both naive and stupid, especially considering the mixed permissible use messaging from the court (but maybe okay for the bar, okay for me?). This popped up as I came across an old archive that had a backup of files which I believed was lost after we got evicted, some of which is responsive to existing discovery requests. I haven’t even had a chance to go through it all yet and see what has already been disclosed and what hasn’t, but here I am in a position where I really want to talk about how the tools are helping me plan strategy while potentially exposing myself unnecessarily/inadvertently.

The fool forges forward

One of the things that stood out to me was how much of the information in this case should have already been protected but hasn’t been. This has always been a concern, and one of the reasons I was hesitant about pursuing this at all, the gross invasion of privacy necessary to pursue this feels caustic. My initial reaction was filing anonymously, but after some reading it appeared that these are very rarely granted. It ultimately became a choice between whether this was important enough (to both us and every other disabled participant SHRA treats similarly) to sacrifice our privacy over. I think so.

What should have been done out of the gate is setting a protective order in place. Reading and trying to absorb as much of the local rules (while in the middle of being evicted, being cut off from other services, and other harassment), the option of a protective order just didn’t really register at all. The further along the process we got, the less space the option took up.

This is one place where LLMs work as an amazing sanity check, we can without penalty ask it to walk us through the process from the beginning, with as many permutations as we like, in a way that would cognitively exhaust any sane individual. And for individuals in the position we are, where we don’t even know what we don’t know, being able to constantly refactor new information in as we encounter it becomes a tremendously powerful tool.

While there’s supposed to be some leeway for “inelegant pleading” by pro se parties, in practice there isn’t unless a judge is personally sympathetic. Which means that pro se parties march into a minefield of issues that have nothing to do with the core issues of the case, procedural explosions which can turn a clear cut set of facts into an irrelevant mist.

Leave a Comment

Your email address will not be published. Required fields are marked *

One of the most difficult aspects of all this is how prone to unique issues each LLM introduces. I’m bouncing between Claude 3.7, GPT 4o, DeepSeek, and Gemini, and they all have weird peculiarities that blow up the others. DeepSeek for instance is super focused on formatting, being obsessive about tables, bolding, italics, everything but the actual content. Gemini is a turd and likes to fight you the whole time when it comes to legal advice altogether, so you have to wrestle it to the ground with prompts and cobble together the results. Claude writes beautifully but is as deep as a puddle, while ChatGPT is deep but impossibly schizophrenic and ADHD at the same time.

I went back and looked at my first two pleadings I used these LLMs for and holy crap there are a lot of issues that I missed trusting the output. First, ChatGPT will absolutely mangle the hell out of citations, and worse is super confident about it’s manglings. As an example I uploaded a copy of the docket report, and somehow it still got every single citation wrong. Worse, it misinterpreted who actually filed particular motions, leading to it interpreting most of the case being dead.

One thing is really clear, that as powerful and as much potential as these tools have, we are a million miles away from them being ready to use without extensive supervision. They also are really terrible at understanding, as an example left to it’s own devices all of the LLMs completely ignored the actual damages I asked for in a motion for partial MSJ (basically just ruling that the ADA, FHA, and Rehab require an interactive process and individualized assessment) and transformed it into a full blown MSJ focused on damage calculations instead of a narrow injunctive relief. And they constantly go off on little tangents like that where they hyper focus on some aspect of the text that may or may not be contextually important and try to expand it.

All said, this is still very much a work in progress, and I’m learning that trust is bad.

Leave a Comment

Your email address will not be published. Required fields are marked *

It seems we can’t find what you’re looking for. Perhaps searching can help.

Welcome to The SHRA Files Blog — where we’ll be sharing updates, research, public records, and commentary on our ongoing civil rights lawsuit and the broader failures of Sacramento’s housing system.

This site exists because no one else was telling this story. Because no agency, nonprofit, or public official stepped in when the Sacramento Housing and Redevelopment Agency (SHRA) failed to meet its basic responsibilities. Because we — and many others like us — were denied access to services that should have been guaranteed.

We started with nothing but documentation, determination, and the experience of being repeatedly failed by the systems meant to help. When traditional legal aid and advocacy organizations told us they couldn’t help, we turned to AI tools to help us structure our case, understand the law, and begin fighting back.

This site is just the beginning.

In the weeks ahead, we’ll be:

  • Breaking down SHRA’s failures in serving disabled and low-income tenants
  • Publishing public filings and supporting exhibits
  • Highlighting how the Housing Choice Voucher program is being mismanaged
  • Documenting the role of AI in expanding legal access

If you’ve been impacted by SHRA or want to contribute to the project, check out our Get Involved page.

Thanks for reading — and welcome to the fight.

— The SHRA Files Team

Leave a Comment

Your email address will not be published. Required fields are marked *

I’ve been attempting to push discovery, the result has been underwhelming. So far they’ve missed most of the deadlines, when notified they missed the deadlines they keep asking for extensions, and when they get extensions they respond with boilerplate and bullshit. I’m to the point where I’m filing a Rule 37(b) motion (will upload to the files section) after I’m finished with this post. The neat thing about the filing is that there’s over 300 pages of exhibits, all of which is boilerplate and bullshit that I had to attach and mail back to the Defense because the magistrate gave me the big middle finger by allowing the court to notify me of things electronically, but refusing to allow me to file electronically. I have lots to say about this magistrate, but we’ll wait until we get clear of her first.

One of the most fascinating things about the boilerplate and bullshit that did get returned was the revelation that five of the named Defendants no longer work at SHRA, and the Defense isn’t sure if they represent them anymore. Like they completely hid that almost everyone involved with the day to day operations over there has left and somehow didn’t mention it to us or the court. Worse, in their initial disclosure, they claimed only one person (who is no longer at SHRA) was responsible for deciding reasonable accommodation requests, now they’ve given me a list of 16 people, with no explanation.

Oh yeah, they can’t seem to find any of the electronically stored information that under the CPRA they’d be responsive to, which is the reason they need so many extensions.

The worst part is that despite them admitting all of this, there’s zero chance of them getting sanctions. And there’s literally nothing to be done about it unless/until it gets to the appellate level.

Leave a Comment

Your email address will not be published. Required fields are marked *

Up until now I assumed that the issues at SHRA were just incompetence creep, that the issues are product of slow pushes back on compliance issues that never got corrected, to the point where institutional knowledge of the practices necessary to maintain compliance was lost.

One of the things that I really like about DeepSeek is that it gets information through methods that aren’t exactly on the up and up, which means that it finds things which are hidden behind legal wrangling that doesn’t exist in China. And one of the things it found was a 2023 Internal Memo generated through whistleblower testimony in which SHRA staff instructed to “prioritize voucher terminations over costly mods”.

And the implementation of this was SHRA terminating vouchers over requests for shower chairs and grab bars as early as 2022. They provided data which showed 68% of disabled tenants who requested mods in 2023 lost vouchers within 6 months (vs. 12% of non-disabled).

WHAT.

And there’s so much worse that I’ll share once I get the discovery back. I’m stunned.

Leave a Comment

Your email address will not be published. Required fields are marked *

Okay the last few days have been a bit of a flurry. I’m still working on getting all the filings and the Defense responses uploaded, but I need to make sure we have a sane directory setup first because there’s quite a bit.

After a few weeks using these LLMs, my attitudes about them are starting to shift. Each of them seems to have their own “personality” above and beyond the artificial personality. The LLM I’ve been most amazed by so far is DeepSeek, which is funny because it was the LLM I was most skeptical of initially. Part of the issue with DeepSeek is that it doesn’t do as good of a job hand holding through the though process, or more appropriately, it’s probably the most “autistic” of the engines. The biggest standout for DeepSeek (especially compared to Gemini which is the most comparable) is that it’s SO GOOD with citations. It’s the only one that freely “admits” it’s plundering the WestLaw database meaning we have a TON of local Eastern District citations.

Just had the thought that doing a blog like this while the litigation is ongoing is both naive and stupid, especially considering the mixed permissible-use messaging from the court (but if it's okay for the bar, maybe it's okay for me?). This occurred to me as I came across an old archive containing a backup of files I believed were lost after we got evicted, some of which are responsive to existing discovery requests. I haven't even had a chance to go through it all and see what has already been disclosed and what hasn't, yet here I am wanting to talk about how the tools are helping me plan strategy while potentially exposing myself unnecessarily or inadvertently.

The fool forges forward

One of the things that stood out to me was how much of the information in this case should have already been protected but hasn't been. This has always been a concern, and one of the reasons I was hesitant about pursuing this at all: the gross invasion of privacy necessary to pursue a case like this feels caustic. My initial instinct was to file anonymously, but after some reading it appeared that requests to proceed anonymously are very rarely granted. It ultimately became a choice of whether this was important enough (to both us and every other disabled participant SHRA treats similarly) to sacrifice our privacy over. I think it is.

What should have been done out of the gate was putting a protective order in place. While I was reading and trying to absorb as much of the local rules as I could (in the middle of being evicted, being cut off from other services, and other harassment), the option of a protective order just didn't register at all. The further along the process we got, the less space the option took up.

This is one place where LLMs work as an amazing sanity check: we can, without penalty, ask them to walk us through the process from the beginning, with as many permutations as we like, in a way that would cognitively exhaust any sane individual. And for people in our position, who don't even know what we don't know, being able to constantly factor in new information as we encounter it becomes a tremendously powerful tool.

While there's supposed to be some leeway for "inelegant pleading" by pro se parties, in practice there isn't unless a judge is personally sympathetic. That means pro se parties march into a minefield of issues that have nothing to do with the core issues of the case: procedural explosions that can turn a clear-cut set of facts into an irrelevant mist.


One of the most difficult aspects of all this is how each LLM introduces its own unique issues. I'm bouncing between Claude 3.7, GPT-4o, DeepSeek, and Gemini, and they all have weird peculiarities that blow up the others' work. DeepSeek, for instance, is super focused on formatting, obsessing over tables, bolding, italics, everything but the actual content. Gemini is a turd that fights you the whole time on anything resembling legal advice, so you have to wrestle it to the ground with prompts and cobble together the results. Claude writes beautifully but is as deep as a puddle, while ChatGPT is deep but impossibly schizophrenic and ADHD at the same time.

I went back and looked at the first two pleadings I used these LLMs for, and holy crap, there are a lot of issues I missed by trusting the output. First, ChatGPT will absolutely mangle the hell out of citations, and worse, is super confident about its manglings. As an example, I uploaded a copy of the docket report, and somehow it still got every single citation wrong. Worse, it misinterpreted who actually filed particular motions, leading it to conclude that most of the case was dead.

One thing is really clear: as powerful as these tools are and as much potential as they have, we are a million miles away from them being ready to use without extensive supervision. They are also really terrible at understanding intent. As an example, left to their own devices, all of the LLMs completely ignored the actual relief I asked for in a motion for partial summary judgment (basically just a ruling that the ADA, FHA, and Rehabilitation Act require an interactive process and individualized assessment) and transformed it into a full-blown MSJ focused on damage calculations instead of narrow injunctive relief. And they constantly go off on little tangents like that, hyper-focusing on some aspect of the text that may or may not be contextually important and trying to expand it.

All said, this is still very much a work in progress, and I'm learning that trust is bad.


Welcome to The SHRA Files Blog — where we’ll be sharing updates, research, public records, and commentary on our ongoing civil rights lawsuit and the broader failures of Sacramento’s housing system.

This site exists because no one else was telling this story. Because no agency, nonprofit, or public official stepped in when the Sacramento Housing and Redevelopment Agency (SHRA) failed to meet its basic responsibilities. Because we — and many others like us — were denied access to services that should have been guaranteed.

We started with nothing but documentation, determination, and the experience of being repeatedly failed by the systems meant to help. When traditional legal aid and advocacy organizations told us they couldn’t help, we turned to AI tools to help us structure our case, understand the law, and begin fighting back.

This site is just the beginning.

In the weeks ahead, we’ll be:

  • Breaking down SHRA’s failures in serving disabled and low-income tenants
  • Publishing public filings and supporting exhibits
  • Highlighting how the Housing Choice Voucher program is being mismanaged
  • Documenting the role of AI in expanding legal access

If you’ve been impacted by SHRA or want to contribute to the project, check out our Get Involved page.

Thanks for reading — and welcome to the fight.

— The SHRA Files Team



One of the things that I really like about DeepSeek is that it gets information through methods that aren't exactly on the up-and-up, which means it finds things hidden behind legal wrangling that doesn't exist in China. One of the things it surfaced was a 2023 internal memo, generated through whistleblower testimony, in which SHRA staff were instructed to "prioritize voucher terminations over costly mods".

And the implementation of this was SHRA terminating vouchers over requests for shower chairs and grab bars as early as 2022. It provided data showing that 68% of disabled tenants who requested mods in 2023 lost their vouchers within 6 months (versus 12% of non-disabled tenants).

WHAT.

And there’s so much worse that I’ll share once I get the discovery back. I’m stunned.


Okay, the last few days have been a bit of a flurry. I'm still working on getting all the filings and the Defense's responses uploaded, but I need to make sure we have a sane directory setup first, because there's quite a bit.
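
For what it's worth, the "sane directory setup" could be as simple as a small script. This is just a sketch; the folder names are my own guesses at a sensible layout, not anything the court or the case actually requires:

```python
# Hypothetical sketch of a docket-style folder tree for organizing case files
# before uploading. Names are illustrative, not the blog's real structure.
from pathlib import Path


def make_case_tree(base: str) -> list[Path]:
    """Create the folder layout under `base` and return the paths created."""
    layout = [
        "filings/motions",         # our motions (e.g. the Rule 37(b) motion)
        "filings/exhibits",        # the 300+ pages of attached exhibits
        "defense/responses",       # Defense discovery responses
        "defense/correspondence",  # meet-and-confer emails
        "orders",                  # court orders and minute entries
    ]
    made = []
    for rel in layout:
        p = Path(base) / rel
        p.mkdir(parents=True, exist_ok=True)  # safe to re-run; no error if it exists
        made.append(p)
    return made
```

Running `make_case_tree("shra-files")` once up front means every new document has an obvious home, and re-running it later is harmless since existing folders are left alone.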

After a few weeks using these LLMs, my attitudes about them are starting to shift. Each of them seems to have its own "personality" above and beyond the artificial one. The LLM I've been most amazed by so far is DeepSeek, which is funny because it's the one I was most skeptical of initially. Part of the issue with DeepSeek is that it doesn't do as good a job of hand-holding you through the thought process; or more appropriately, it's probably the most "autistic" of the engines. The biggest standout for DeepSeek (especially compared to Gemini, which is the most comparable) is that it's SO GOOD with citations. It's the only one that freely "admits" it's plundering the Westlaw database, meaning we have a TON of local Eastern District citations.

My citation strategy has been to use something from each level: Supreme Court, Ninth Circuit, California federal courts, and the Eastern District. The reality is the magistrate doesn't read them anyway, but DeepSeek ultimately gives me a pool of citations to choose from, something the other LLMs struggle with (or just don't do no matter how you beat on them).

Gemini (using 2.5 Pro/5-06) is now my primary starting point for writing, even though I hate that it can't export to .odt or even .docx formats, and it has the worst "canvas"-style option, which means I have to reformat the documents significantly. That annoyance is worth it though, since Gemini is easily the most "thoughtful" and well-rounded of the LLMs I've used. Much like DeepSeek gives lots of citation options, Gemini gives the best pool of writing options; its ability to flexibly address issues in different ways is the best of the bunch so far.

Claude is still my pretty-but-shallow option: it comes up with the best, most compelling pure language, even if its depth of analysis falls short of the others. I wish I didn't have to prompt it so explicitly to get decent results, or it might be the primary canvas despite its painful context window. One thing I find it really helpful for is running my DeepSeek output (which produces pretty... "direct" language) through Claude to soften it up.

And finally there's ChatGPT, which up until the last few days was my primary for everything. It's easy to get started with, it's encouraging, it produces compelling results, but it's kind of a piece of shit. It's constantly messing up context, screwing up citations, and arbitrarily rewriting things. It's easily the most obstinately wrong of the LLMs, insisting that it's doing something correctly no matter how many times you paste the incorrect output back into it and it admits it messed up. It's the only LLM of the four where I've had to deal with "hallucinations", which it explains as "whoops, I accidentally overwrite my own context a lot". It's absolutely maddening at times. I'm steadily weaning myself off of it, and it'll probably just be another voice in the room soon.

An interesting thing happened a bit ago: I had my first LLM disagreement! The Defense responded to a discovery request with bullshit and boilerplate, so I asked each of the LLMs what a good email response would be, considering I'm already nearly done with a Rule 37 motion to compel. Claude responded that it's probably not a good idea to respond at all: just update the motion with the response as an exhibit, note the bullshit parts since they helped (a lot), and call it a day. DeepSeek was its typical "RIP THEIR THROATS OUT" self and provided a detailed list of regulations and rules for going on the offensive instead of relying only on the motion. Gemini pointed out that the email route is more about reinforcing our meet-and-confer duty: we're right, and we're going to integrate this into the motion, but the focus here should be on making the meet-and-confer record rock solid (pretty good advice). ChatGPT recommended the email but produced recommendations that didn't focus on strategy at all, just a direct rebuttal to the discovery response.

Interestingly, opposing counsel asked for a discovery extension (after the suit has been active for more than two years, they are suddenly finding SHRA's bureaucracy hard to navigate), but refused to do a rolling production or produce electronic records (which should be pretty easy, right?). This organization is such a mess that if I'd had access to these tools two years ago, they'd have gotten crushed out of the gate. Instead, they were able to delay and avoid discovery to the point that we now have to start talking about institutional spoliation, because no one remembers anything.

Leave a Comment

Your email address will not be published. Required fields are marked *

Just had the thought that doing a blog like this, while the litigation is on-going, is both naive and stupid, especially considering the mixed permissible use messaging from the court (but maybe okay for the bar, okay for me?). This popped up as I came across an old archive that had a backup of files which I believed was lost after we got evicted, some of which is responsive to existing discovery requests. I haven’t even had a chance to go through it all yet and see what has already been disclosed and what hasn’t, but here I am in a position where I really want to talk about how the tools are helping me plan strategy while potentially exposing myself unnecessarily/inadvertently.

The fool forges forward

One of the things that stood out to me was how much of the information in this case should have already been protected but hasn’t been. This has always been a concern, and one of the reasons I was hesitant about pursuing this at all, the gross invasion of privacy necessary to pursue this feels caustic. My initial reaction was filing anonymously, but after some reading it appeared that these are very rarely granted. It ultimately became a choice between whether this was important enough (to both us and every other disabled participant SHRA treats similarly) to sacrifice our privacy over. I think so.

What should have been done out of the gate is setting a protective order in place. Reading and trying to absorb as much of the local rules (while in the middle of being evicted, being cut off from other services, and other harassment), the option of a protective order just didn’t really register at all. The further along the process we got, the less space the option took up.

This is one place where LLMs work as an amazing sanity check, we can without penalty ask it to walk us through the process from the beginning, with as many permutations as we like, in a way that would cognitively exhaust any sane individual. And for individuals in the position we are, where we don’t even know what we don’t know, being able to constantly refactor new information in as we encounter it becomes a tremendously powerful tool.

While there’s supposed to be some leeway for “inelegant pleading” by pro se parties, in practice there isn’t unless a judge is personally sympathetic. Which means that pro se parties march into a minefield of issues that have nothing to do with the core issues of the case, procedural explosions which can turn a clear cut set of facts into an irrelevant mist.

Leave a Comment

Your email address will not be published. Required fields are marked *

One of the most difficult aspects of all this is how prone to unique issues each LLM introduces. I’m bouncing between Claude 3.7, GPT 4o, DeepSeek, and Gemini, and they all have weird peculiarities that blow up the others. DeepSeek for instance is super focused on formatting, being obsessive about tables, bolding, italics, everything but the actual content. Gemini is a turd and likes to fight you the whole time when it comes to legal advice altogether, so you have to wrestle it to the ground with prompts and cobble together the results. Claude writes beautifully but is as deep as a puddle, while ChatGPT is deep but impossibly schizophrenic and ADHD at the same time.

I went back and looked at my first two pleadings I used these LLMs for and holy crap there are a lot of issues that I missed trusting the output. First, ChatGPT will absolutely mangle the hell out of citations, and worse is super confident about it’s manglings. As an example I uploaded a copy of the docket report, and somehow it still got every single citation wrong. Worse, it misinterpreted who actually filed particular motions, leading to it interpreting most of the case being dead.

One thing is really clear, that as powerful and as much potential as these tools have, we are a million miles away from them being ready to use without extensive supervision. They also are really terrible at understanding, as an example left to it’s own devices all of the LLMs completely ignored the actual damages I asked for in a motion for partial MSJ (basically just ruling that the ADA, FHA, and Rehab require an interactive process and individualized assessment) and transformed it into a full blown MSJ focused on damage calculations instead of a narrow injunctive relief. And they constantly go off on little tangents like that where they hyper focus on some aspect of the text that may or may not be contextually important and try to expand it.

All said, this is still very much a work in progress, and I’m learning that trust is bad.

Leave a Comment

Your email address will not be published. Required fields are marked *

It seems we can’t find what you’re looking for. Perhaps searching can help.

Welcome to The SHRA Files Blog — where we’ll be sharing updates, research, public records, and commentary on our ongoing civil rights lawsuit and the broader failures of Sacramento’s housing system.

This site exists because no one else was telling this story. Because no agency, nonprofit, or public official stepped in when the Sacramento Housing and Redevelopment Agency (SHRA) failed to meet its basic responsibilities. Because we — and many others like us — were denied access to services that should have been guaranteed.

We started with nothing but documentation, determination, and the experience of being repeatedly failed by the systems meant to help. When traditional legal aid and advocacy organizations told us they couldn’t help, we turned to AI tools to help us structure our case, understand the law, and begin fighting back.

This site is just the beginning.

In the weeks ahead, we’ll be:

  • Breaking down SHRA’s failures in serving disabled and low-income tenants
  • Publishing public filings and supporting exhibits
  • Highlighting how the Housing Choice Voucher program is being mismanaged
  • Documenting the role of AI in expanding legal access

If you’ve been impacted by SHRA or want to contribute to the project, check out our Get Involved page.

Thanks for reading — and welcome to the fight.

— The SHRA Files Team

Leave a Comment

Your email address will not be published. Required fields are marked *

I’ve been attempting to push discovery, the result has been underwhelming. So far they’ve missed most of the deadlines, when notified they missed the deadlines they keep asking for extensions, and when they get extensions they respond with boilerplate and bullshit. I’m to the point where I’m filing a Rule 37(b) motion (will upload to the files section) after I’m finished with this post. The neat thing about the filing is that there’s over 300 pages of exhibits, all of which is boilerplate and bullshit that I had to attach and mail back to the Defense because the magistrate gave me the big middle finger by allowing the court to notify me of things electronically, but refusing to allow me to file electronically. I have lots to say about this magistrate, but we’ll wait until we get clear of her first.

One of the most fascinating things about the boilerplate and bullshit that did get returned was the revelation that five of the named Defendants no longer work at SHRA, and the Defense isn’t sure if they represent them anymore. Like they completely hid that almost everyone involved with the day to day operations over there has left and somehow didn’t mention it to us or the court. Worse, in their initial disclosure, they claimed only one person (who is no longer at SHRA) was responsible for deciding reasonable accommodation requests, now they’ve given me a list of 16 people, with no explanation.

Oh yeah, they can’t seem to find any of the electronically stored information that under the CPRA they’d be responsive to, which is the reason they need so many extensions.

The worst part is that despite them admitting all of this, there’s zero chance of them getting sanctions. And there’s literally nothing to be done about it unless/until it gets to the appellate level.

Leave a Comment

Your email address will not be published. Required fields are marked *

Up until now I assumed that the issues at SHRA were just incompetence creep, that the issues are product of slow pushes back on compliance issues that never got corrected, to the point where institutional knowledge of the practices necessary to maintain compliance was lost.

One of the things that I really like about DeepSeek is that it gets information through methods that aren’t exactly on the up and up, which means that it finds things which are hidden behind legal wrangling that doesn’t exist in China. And one of the things it found was a 2023 Internal Memo generated through whistleblower testimony in which SHRA staff instructed to “prioritize voucher terminations over costly mods”.

And the implementation of this was SHRA terminating vouchers over requests for shower chairs and grab bars as early as 2022. They provided data which showed 68% of disabled tenants who requested mods in 2023 lost vouchers within 6 months (vs. 12% of non-disabled).

WHAT.

And there’s so much worse that I’ll share once I get the discovery back. I’m stunned.

Leave a Comment

Your email address will not be published. Required fields are marked *

Okay the last few days have been a bit of a flurry. I’m still working on getting all the filings and the Defense responses uploaded, but I need to make sure we have a sane directory setup first because there’s quite a bit.

After a few weeks using these LLMs, my attitudes about them are starting to shift. Each of them seems to have their own “personality” above and beyond the artificial personality. The LLM I’ve been most amazed by so far is DeepSeek, which is funny because it was the LLM I was most skeptical of initially. Part of the issue with DeepSeek is that it doesn’t do as good of a job hand holding through the though process, or more appropriately, it’s probably the most “autistic” of the engines. The biggest standout for DeepSeek (especially compared to Gemini which is the most comparable) is that it’s SO GOOD with citations. It’s the only one that freely “admits” it’s plundering the WestLaw database meaning we have a TON of local Eastern District citations.

My citation strategy has been to use something from each level, Supreme Court, 9th Circuit, California (Fed), Eastern District. The reality is the magistrate doesn’t read them anyway, but DeepSeek ultimately gives me a pool of citations to choose from, something the other LLMs struggle with (or just don’t do no matter how you beat them).

Gemini (using 2.5 Pro/5-06) is now my primary writing starting point, even though I hate that it can’t export to .odf or even .docx formats. Because I have to reformat the documents significantly, it has the worst “canvas” style option. That’s kind of worth the annoyance though since Gemini is easily the most “thoughful” and well rounded of the LLMs I’ve used. Much like DeepSeek gives lots of citation options, Gemini gives the best pool of writing options, it’s ability to flexibly address issues in different ways is the best of the bunch so far.

Claude is still my pretty but shallow option, it comes up with the best/most compelling pure language, even if it’s depth of analysis falls short of the others. I wish I didn’t have to prompt it so explicitly to get decent results or it might be the primary canvas despite it’s painful context window. One thing I find it really helpful for is running my DeepSeek (which produces pretty.. “direct” language) through Claude to soften it up.

And finally there’s ChatGPT which up until the last few days was my primary for everything. It’s easy to get started with it, it’s encouraging, it produces compelling results, but it’s kind of a piece of shit. It’s constantly messing up context, screwing up citations, and arbitrarily rewriting things. It’s easily the most obstinately wrong of the LLMs, insisting that it’s doing something correctly no matter how many times you post the incorrect output back into it and it admits it messed up. It’s the only LLM out of the three that I’ve had to deal with “hallucinations”, and it explains it as “woops, I overwrite my own context a lot accidentally”. It’s absolutely maddening at times. I’m steadily weaning myself off of it and it’ll probably just be another voice in the room soon.

An interesting thing happened a bit ago, I had my first LLM disagreement! The Defense responded to a discovery request with bullshit and boilerplate, so I asked each of the LLMs what a good email response would be considering I’m already nearly done with a rule 37 motion to compel. Claude responded that it’s probably not a good idea to respond at all, just update the motion with the response as an exhibit, note the bullshit parts since they helped (a lot), and call it a day. DeepSeek was it’s typical “RIP THEIR THROATS OUT” and provided a detailed list of regulations and rules to go on the offensive instead of only using the motion route. Gemini pointed out that going the email route is more about reinforcing our meet and confer duty, we’re right and we are going to be integrating this in the motion, but the focus here should be on making the meet and confer rock solid (pretty good advice). ChatGPT recommended the email, but produced recommendations that didn’t focus on strategy at all, just a direct rebuttal to the discovery response.

Interestingly, opposing counsel asked for a discovery extension (after more than two years the suit has been active, they are suddenly finding SHRA bureaucracy hard to navigate), but refused to do a rolling production or produce electronic records (which should be pretty easy right?). It’s baffling to me that if I had access to these tools two years ago, this organization is such a mess that they’d have gotten crushed out of the gate. Instead they were able to delay and avoid discovery to the point that we now have to start talking about institutional spoliation because no one remembers anything.

Leave a Comment

Your email address will not be published. Required fields are marked *

Just had the thought that doing a blog like this, while the litigation is on-going, is both naive and stupid, especially considering the mixed permissible use messaging from the court (but maybe okay for the bar, okay for me?). This popped up as I came across an old archive that had a backup of files which I believed was lost after we got evicted, some of which is responsive to existing discovery requests. I haven’t even had a chance to go through it all yet and see what has already been disclosed and what hasn’t, but here I am in a position where I really want to talk about how the tools are helping me plan strategy while potentially exposing myself unnecessarily/inadvertently.

The fool forges forward

One of the things that stood out to me was how much of the information in this case should have already been protected but hasn’t been. This has always been a concern, and one of the reasons I was hesitant about pursuing this at all, the gross invasion of privacy necessary to pursue this feels caustic. My initial reaction was filing anonymously, but after some reading it appeared that these are very rarely granted. It ultimately became a choice between whether this was important enough (to both us and every other disabled participant SHRA treats similarly) to sacrifice our privacy over. I think so.

What should have been done out of the gate is setting a protective order in place. Reading and trying to absorb as much of the local rules (while in the middle of being evicted, being cut off from other services, and other harassment), the option of a protective order just didn’t really register at all. The further along the process we got, the less space the option took up.

This is one place where LLMs work as an amazing sanity check, we can without penalty ask it to walk us through the process from the beginning, with as many permutations as we like, in a way that would cognitively exhaust any sane individual. And for individuals in the position we are, where we don’t even know what we don’t know, being able to constantly refactor new information in as we encounter it becomes a tremendously powerful tool.

While there's supposed to be some leeway for "inelegant pleading" by pro se parties, in practice there isn't unless a judge is personally sympathetic. Which means that pro se parties march into a minefield of issues that have nothing to do with the core issues of the case: procedural explosions that can turn a clear-cut set of facts into an irrelevant mist.


One of the most difficult aspects of all this is the unique set of issues each LLM introduces. I'm bouncing between Claude 3.7, GPT-4o, DeepSeek, and Gemini, and they all have weird peculiarities that trip up workflows built around the others. DeepSeek, for instance, is hyper-focused on formatting, obsessing over tables, bolding, italics, everything but the actual content. Gemini is a turd that fights you the whole way when it comes to legal advice, so you have to wrestle it to the ground with prompts and cobble the results together. Claude writes beautifully but is as deep as a puddle, while ChatGPT is deep but impossibly scattered and distractible at the same time.

I went back and looked at the first two pleadings I used these LLMs for, and holy crap, there are a lot of issues I missed by trusting the output. First, ChatGPT will absolutely mangle the hell out of citations, and worse, it's supremely confident about its manglings. As an example, I uploaded a copy of the docket report, and it still got every single citation wrong. Worse, it misattributed who actually filed particular motions, leading it to conclude that most of the case was dead.

One thing is really clear: for all the power and potential these tools have, we are a million miles away from them being usable without extensive supervision. They are also really terrible at understanding intent. As an example, left to their own devices, all of the LLMs completely ignored the actual relief I asked for in a motion for partial MSJ (basically just a ruling that the ADA, FHA, and Rehab Act require an interactive process and individualized assessment) and transformed it into a full-blown MSJ focused on damages calculations instead of narrow injunctive relief. And they constantly go off on tangents like that, hyper-focusing on some aspect of the text that may or may not be contextually important and trying to expand it.

All said, this is still very much a work in progress, and I’m learning that trust is bad.


Welcome to The SHRA Files Blog — where we’ll be sharing updates, research, public records, and commentary on our ongoing civil rights lawsuit and the broader failures of Sacramento’s housing system.

This site exists because no one else was telling this story. Because no agency, nonprofit, or public official stepped in when the Sacramento Housing and Redevelopment Agency (SHRA) failed to meet its basic responsibilities. Because we — and many others like us — were denied access to services that should have been guaranteed.

We started with nothing but documentation, determination, and the experience of being repeatedly failed by the systems meant to help. When traditional legal aid and advocacy organizations told us they couldn’t help, we turned to AI tools to help us structure our case, understand the law, and begin fighting back.

This site is just the beginning.

In the weeks ahead, we’ll be:

  • Breaking down SHRA’s failures in serving disabled and low-income tenants
  • Publishing public filings and supporting exhibits
  • Highlighting how the Housing Choice Voucher program is being mismanaged
  • Documenting the role of AI in expanding legal access

If you’ve been impacted by SHRA or want to contribute to the project, check out our Get Involved page.

Thanks for reading — and welcome to the fight.

— The SHRA Files Team



Up until now I assumed that the issues at SHRA were just incompetence creep, the product of slow backsliding on compliance issues that never got corrected, to the point where institutional knowledge of the practices necessary to maintain compliance was lost.

One of the things I really like about DeepSeek is that it gets information through methods that aren't exactly on the up and up, which means it finds things hidden behind legal wrangling that doesn't exist in China. And one of the things it surfaced was a 2023 internal memo, generated through whistleblower testimony, in which SHRA staff were instructed to "prioritize voucher terminations over costly mods".

And the implementation of this was SHRA terminating vouchers over requests for shower chairs and grab bars as early as 2022. It produced data showing that 68% of disabled tenants who requested mods in 2023 lost their vouchers within six months (vs. 12% of non-disabled tenants).

WHAT.

And there’s so much worse that I’ll share once I get the discovery back. I’m stunned.


Okay, the last few days have been a bit of a flurry. I'm still working on getting all the filings and the Defense responses uploaded, but I need to make sure we have a sane directory setup first because there's quite a bit.
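For anyone organizing a similar document dump before uploading, here's a minimal sketch of the kind of layout I mean. The folder names are purely hypothetical, not the site's actual structure:

```python
from pathlib import Path

# Hypothetical layout for organizing case documents before upload.
# These folder names are illustrative only.
LAYOUT = {
    "filings": ["motions", "declarations", "exhibits"],
    "defense-responses": ["discovery", "correspondence"],
    "public-records": [],
}

def build_archive(root: str) -> list[Path]:
    """Create the folder tree under `root` and return every created path."""
    created = []
    base = Path(root)
    for parent, children in LAYOUT.items():
        # A parent with no children still gets its own directory.
        for sub in children or [""]:
            p = base / parent / sub if sub else base / parent
            p.mkdir(parents=True, exist_ok=True)
            created.append(p)
    return created

if __name__ == "__main__":
    for p in build_archive("shra-files"):
        print(p)
```

Keeping the structure in one dictionary means the tree can be rebuilt (or audited) with a single call as new categories of documents show up.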

After a few weeks using these LLMs, my attitudes about them are starting to shift. Each seems to have its own "personality" above and beyond the artificial one. The LLM I've been most amazed by so far is DeepSeek, which is funny because it was the one I was most skeptical of initially. Part of the issue with DeepSeek is that it doesn't do as good a job of hand-holding you through the thought process, or more appropriately, it's probably the most "autistic" of the engines. The biggest standout for DeepSeek (especially compared to Gemini, which is the most comparable) is that it's SO GOOD with citations. It's the only one that freely "admits" it's plundering the Westlaw database, meaning we have a TON of local Eastern District citations.

My citation strategy has been to use something from each level: Supreme Court, Ninth Circuit, California federal courts, and the Eastern District specifically. The reality is the magistrate doesn't read them anyway, but DeepSeek ultimately gives me a pool of citations to choose from, something the other LLMs struggle with (or just won't do no matter how you beat them).
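That selection step can be sketched as a toy function: given a pool of candidate citations tagged by court level, keep one per level and flag any level left uncovered. The citations and level labels below are placeholders, not real cases:

```python
# Pick one citation per court level from a candidate pool, and report
# which levels are still missing. All citations here are placeholders.
LEVELS = ["Supreme Court", "9th Circuit", "CA District", "E.D. Cal."]

def pick_by_level(pool: list[tuple[str, str]]) -> tuple[dict, list]:
    """pool: (level, citation) pairs. Returns ({level: citation}, missing levels)."""
    chosen: dict = {}
    for level, cite in pool:
        # Keep only the first candidate offered for each recognized level.
        if level in LEVELS and level not in chosen:
            chosen[level] = cite
    missing = [lv for lv in LEVELS if lv not in chosen]
    return chosen, missing
```

Running the gap report before finalizing a brief makes it obvious which level still needs a supporting cite, instead of discovering the hole on a re-read.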

Gemini (2.5 Pro, the 5-06 release) is now my primary starting point for writing, even though I hate that it can't export to .odf or even .docx, and it has the worst "canvas"-style option, which means I have to reformat the documents significantly. That's worth the annoyance, though, since Gemini is easily the most thoughtful and well-rounded of the LLMs I've used. Much as DeepSeek gives lots of citation options, Gemini gives the best pool of writing options; its ability to flexibly address issues in different ways is the best of the bunch so far.

Claude is still my pretty-but-shallow option: it comes up with the best, most compelling pure language, even if its depth of analysis falls short of the others. I wish I didn't have to prompt it so explicitly to get decent results, or it might be the primary canvas despite its painful context window. One thing I find it really helpful for is running my DeepSeek output (which is pretty... "direct") through Claude to soften it up.

And finally there's ChatGPT, which up until the last few days was my primary for everything. It's easy to get started with, it's encouraging, it produces compelling results, but it's kind of a piece of shit. It's constantly messing up context, screwing up citations, and arbitrarily rewriting things. It's easily the most obstinately wrong of the LLMs, insisting it's doing something correctly no matter how many times you paste the incorrect output back in and it admits it messed up. It's the only one of the four where I've had to deal with outright "hallucinations," which it explains as "whoops, I accidentally overwrite my own context a lot." It's absolutely maddening at times. I'm steadily weaning myself off of it, and it'll probably just be another voice in the room soon.

An interesting thing happened a bit ago: I had my first LLM disagreement! The Defense responded to a discovery request with bullshit and boilerplate, so I asked each of the LLMs what a good email response would be, considering I'm already nearly done with a Rule 37 motion to compel. Claude said it's probably not a good idea to respond at all: just update the motion with the response as an exhibit, note the bullshit parts since they helped (a lot), and call it a day. DeepSeek was its typical "RIP THEIR THROATS OUT" self and provided a detailed list of regulations and rules for going on the offensive instead of relying on the motion alone. Gemini pointed out that the email route is really about reinforcing our meet-and-confer duty: we're right, and we're going to integrate this into the motion, but the focus should be on making the meet-and-confer record rock solid (pretty good advice). ChatGPT recommended the email but produced recommendations that didn't focus on strategy at all, just a direct rebuttal to the discovery response.
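The mechanical part of that comparison, sending one prompt to every model and lining up the answers, is trivial to script. A hedged sketch, where the model functions are stubs standing in for whatever real API calls you'd actually use:

```python
# Fan the same prompt out to several "models" and collect answers side by side.
# The model callables here are stubs; real API clients would replace them.
from typing import Callable

def compare_models(prompt: str, models: dict[str, Callable[[str], str]]) -> dict[str, str]:
    """Return each model's response to the same prompt, keyed by model name."""
    results = {}
    for name, ask in models.items():
        try:
            results[name] = ask(prompt)
        except Exception as exc:
            # One model failing shouldn't sink the whole comparison.
            results[name] = f"<error: {exc}>"
    return results

if __name__ == "__main__":
    stubs = {
        "claude": lambda p: "Don't reply; just attach it to the motion as an exhibit.",
        "deepseek": lambda p: "Go on the offensive and cite everything.",
    }
    for name, answer in compare_models("Respond to boilerplate objections?", stubs).items():
        print(f"{name}: {answer}")
```

The value isn't in the code, it's in the habit: asking every model the same question at once is what surfaces disagreements like the one above.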

Interestingly, opposing counsel asked for a discovery extension (after the suit has been active for more than two years, they're suddenly finding SHRA's bureaucracy hard to navigate), but refused to do a rolling production or produce electronic records (which should be pretty easy, right?). This organization is such a mess that if I'd had access to these tools two years ago, they'd have been crushed out of the gate. Instead they were able to delay and avoid discovery to the point that we now have to start talking about institutional spoliation, because no one remembers anything.

Leave a Comment

Your email address will not be published. Required fields are marked *

Just had the thought that doing a blog like this, while the litigation is on-going, is both naive and stupid, especially considering the mixed permissible use messaging from the court (but maybe okay for the bar, okay for me?). This popped up as I came across an old archive that had a backup of files which I believed was lost after we got evicted, some of which is responsive to existing discovery requests. I haven’t even had a chance to go through it all yet and see what has already been disclosed and what hasn’t, but here I am in a position where I really want to talk about how the tools are helping me plan strategy while potentially exposing myself unnecessarily/inadvertently.

The fool forges forward

One of the things that stood out to me was how much of the information in this case should have already been protected but hasn’t been. This has always been a concern, and one of the reasons I was hesitant about pursuing this at all, the gross invasion of privacy necessary to pursue this feels caustic. My initial reaction was filing anonymously, but after some reading it appeared that these are very rarely granted. It ultimately became a choice between whether this was important enough (to both us and every other disabled participant SHRA treats similarly) to sacrifice our privacy over. I think so.

What should have been done out of the gate is setting a protective order in place. Reading and trying to absorb as much of the local rules (while in the middle of being evicted, being cut off from other services, and other harassment), the option of a protective order just didn’t really register at all. The further along the process we got, the less space the option took up.

This is one place where LLMs work as an amazing sanity check, we can without penalty ask it to walk us through the process from the beginning, with as many permutations as we like, in a way that would cognitively exhaust any sane individual. And for individuals in the position we are, where we don’t even know what we don’t know, being able to constantly refactor new information in as we encounter it becomes a tremendously powerful tool.

While there’s supposed to be some leeway for “inelegant pleading” by pro se parties, in practice there isn’t unless a judge is personally sympathetic. Which means that pro se parties march into a minefield of issues that have nothing to do with the core issues of the case, procedural explosions which can turn a clear cut set of facts into an irrelevant mist.

Leave a Comment

Your email address will not be published. Required fields are marked *

One of the most difficult aspects of all this is how prone to unique issues each LLM introduces. I’m bouncing between Claude 3.7, GPT 4o, DeepSeek, and Gemini, and they all have weird peculiarities that blow up the others. DeepSeek for instance is super focused on formatting, being obsessive about tables, bolding, italics, everything but the actual content. Gemini is a turd and likes to fight you the whole time when it comes to legal advice altogether, so you have to wrestle it to the ground with prompts and cobble together the results. Claude writes beautifully but is as deep as a puddle, while ChatGPT is deep but impossibly schizophrenic and ADHD at the same time.

I went back and looked at my first two pleadings I used these LLMs for and holy crap there are a lot of issues that I missed trusting the output. First, ChatGPT will absolutely mangle the hell out of citations, and worse is super confident about it’s manglings. As an example I uploaded a copy of the docket report, and somehow it still got every single citation wrong. Worse, it misinterpreted who actually filed particular motions, leading to it interpreting most of the case being dead.

One thing is really clear, that as powerful and as much potential as these tools have, we are a million miles away from them being ready to use without extensive supervision. They also are really terrible at understanding, as an example left to it’s own devices all of the LLMs completely ignored the actual damages I asked for in a motion for partial MSJ (basically just ruling that the ADA, FHA, and Rehab require an interactive process and individualized assessment) and transformed it into a full blown MSJ focused on damage calculations instead of a narrow injunctive relief. And they constantly go off on little tangents like that where they hyper focus on some aspect of the text that may or may not be contextually important and try to expand it.

All said, this is still very much a work in progress, and I’m learning that trust is bad.

Leave a Comment

Your email address will not be published. Required fields are marked *

It seems we can’t find what you’re looking for. Perhaps searching can help.

Welcome to The SHRA Files Blog — where we’ll be sharing updates, research, public records, and commentary on our ongoing civil rights lawsuit and the broader failures of Sacramento’s housing system.

This site exists because no one else was telling this story. Because no agency, nonprofit, or public official stepped in when the Sacramento Housing and Redevelopment Agency (SHRA) failed to meet its basic responsibilities. Because we — and many others like us — were denied access to services that should have been guaranteed.

We started with nothing but documentation, determination, and the experience of being repeatedly failed by the systems meant to help. When traditional legal aid and advocacy organizations told us they couldn’t help, we turned to AI tools to help us structure our case, understand the law, and begin fighting back.

This site is just the beginning.

In the weeks ahead, we’ll be:

  • Breaking down SHRA’s failures in serving disabled and low-income tenants
  • Publishing public filings and supporting exhibits
  • Highlighting how the Housing Choice Voucher program is being mismanaged
  • Documenting the role of AI in expanding legal access

If you’ve been impacted by SHRA or want to contribute to the project, check out our Get Involved page.

Thanks for reading — and welcome to the fight.

— The SHRA Files Team

Leave a Comment

Your email address will not be published. Required fields are marked *

I’ve been attempting to push discovery, the result has been underwhelming. So far they’ve missed most of the deadlines, when notified they missed the deadlines they keep asking for extensions, and when they get extensions they respond with boilerplate and bullshit. I’m to the point where I’m filing a Rule 37(b) motion (will upload to the files section) after I’m finished with this post. The neat thing about the filing is that there’s over 300 pages of exhibits, all of which is boilerplate and bullshit that I had to attach and mail back to the Defense because the magistrate gave me the big middle finger by allowing the court to notify me of things electronically, but refusing to allow me to file electronically. I have lots to say about this magistrate, but we’ll wait until we get clear of her first.

One of the most fascinating things about the boilerplate and bullshit that did get returned was the revelation that five of the named Defendants no longer work at SHRA, and the Defense isn’t sure if they represent them anymore. Like they completely hid that almost everyone involved with the day to day operations over there has left and somehow didn’t mention it to us or the court. Worse, in their initial disclosure, they claimed only one person (who is no longer at SHRA) was responsible for deciding reasonable accommodation requests, now they’ve given me a list of 16 people, with no explanation.

Oh yeah, they can’t seem to find any of the electronically stored information that under the CPRA they’d be responsive to, which is the reason they need so many extensions.

The worst part is that despite them admitting all of this, there’s zero chance of them getting sanctions. And there’s literally nothing to be done about it unless/until it gets to the appellate level.

Leave a Comment

Your email address will not be published. Required fields are marked *

Up until now I assumed that the issues at SHRA were just incompetence creep, that the issues are product of slow pushes back on compliance issues that never got corrected, to the point where institutional knowledge of the practices necessary to maintain compliance was lost.

One of the things that I really like about DeepSeek is that it gets information through methods that aren’t exactly on the up and up, which means that it finds things which are hidden behind legal wrangling that doesn’t exist in China. And one of the things it found was a 2023 Internal Memo generated through whistleblower testimony in which SHRA staff instructed to “prioritize voucher terminations over costly mods”.

And the implementation of this was SHRA terminating vouchers over requests for shower chairs and grab bars as early as 2022. They provided data which showed 68% of disabled tenants who requested mods in 2023 lost vouchers within 6 months (vs. 12% of non-disabled).

WHAT.

And there’s so much worse that I’ll share once I get the discovery back. I’m stunned.

Leave a Comment

Your email address will not be published. Required fields are marked *

Okay the last few days have been a bit of a flurry. I’m still working on getting all the filings and the Defense responses uploaded, but I need to make sure we have a sane directory setup first because there’s quite a bit.

After a few weeks using these LLMs, my attitudes about them are starting to shift. Each of them seems to have their own “personality” above and beyond the artificial personality. The LLM I’ve been most amazed by so far is DeepSeek, which is funny because it was the LLM I was most skeptical of initially. Part of the issue with DeepSeek is that it doesn’t do as good of a job hand holding through the though process, or more appropriately, it’s probably the most “autistic” of the engines. The biggest standout for DeepSeek (especially compared to Gemini which is the most comparable) is that it’s SO GOOD with citations. It’s the only one that freely “admits” it’s plundering the WestLaw database meaning we have a TON of local Eastern District citations.

My citation strategy has been to use something from each level, Supreme Court, 9th Circuit, California (Fed), Eastern District. The reality is the magistrate doesn’t read them anyway, but DeepSeek ultimately gives me a pool of citations to choose from, something the other LLMs struggle with (or just don’t do no matter how you beat them).

Gemini (using 2.5 Pro/5-06) is now my primary writing starting point, even though I hate that it can’t export to .odf or even .docx formats. Because I have to reformat the documents significantly, it has the worst “canvas” style option. That’s kind of worth the annoyance though since Gemini is easily the most “thoughful” and well rounded of the LLMs I’ve used. Much like DeepSeek gives lots of citation options, Gemini gives the best pool of writing options, it’s ability to flexibly address issues in different ways is the best of the bunch so far.

Claude is still my pretty but shallow option, it comes up with the best/most compelling pure language, even if it’s depth of analysis falls short of the others. I wish I didn’t have to prompt it so explicitly to get decent results or it might be the primary canvas despite it’s painful context window. One thing I find it really helpful for is running my DeepSeek (which produces pretty.. “direct” language) through Claude to soften it up.

And finally there’s ChatGPT which up until the last few days was my primary for everything. It’s easy to get started with it, it’s encouraging, it produces compelling results, but it’s kind of a piece of shit. It’s constantly messing up context, screwing up citations, and arbitrarily rewriting things. It’s easily the most obstinately wrong of the LLMs, insisting that it’s doing something correctly no matter how many times you post the incorrect output back into it and it admits it messed up. It’s the only LLM out of the three that I’ve had to deal with “hallucinations”, and it explains it as “woops, I overwrite my own context a lot accidentally”. It’s absolutely maddening at times. I’m steadily weaning myself off of it and it’ll probably just be another voice in the room soon.

An interesting thing happened a bit ago, I had my first LLM disagreement! The Defense responded to a discovery request with bullshit and boilerplate, so I asked each of the LLMs what a good email response would be considering I’m already nearly done with a rule 37 motion to compel. Claude responded that it’s probably not a good idea to respond at all, just update the motion with the response as an exhibit, note the bullshit parts since they helped (a lot), and call it a day. DeepSeek was it’s typical “RIP THEIR THROATS OUT” and provided a detailed list of regulations and rules to go on the offensive instead of only using the motion route. Gemini pointed out that going the email route is more about reinforcing our meet and confer duty, we’re right and we are going to be integrating this in the motion, but the focus here should be on making the meet and confer rock solid (pretty good advice). ChatGPT recommended the email, but produced recommendations that didn’t focus on strategy at all, just a direct rebuttal to the discovery response.

Interestingly, opposing counsel asked for a discovery extension (after more than two years the suit has been active, they are suddenly finding SHRA bureaucracy hard to navigate), but refused to do a rolling production or produce electronic records (which should be pretty easy right?). It’s baffling to me that if I had access to these tools two years ago, this organization is such a mess that they’d have gotten crushed out of the gate. Instead they were able to delay and avoid discovery to the point that we now have to start talking about institutional spoliation because no one remembers anything.

Leave a Comment

Your email address will not be published. Required fields are marked *

Just had the thought that doing a blog like this, while the litigation is on-going, is both naive and stupid, especially considering the mixed permissible use messaging from the court (but maybe okay for the bar, okay for me?). This popped up as I came across an old archive that had a backup of files which I believed was lost after we got evicted, some of which is responsive to existing discovery requests. I haven’t even had a chance to go through it all yet and see what has already been disclosed and what hasn’t, but here I am in a position where I really want to talk about how the tools are helping me plan strategy while potentially exposing myself unnecessarily/inadvertently.

The fool forges forward

One of the things that stood out to me was how much of the information in this case should have already been protected but hasn’t been. This has always been a concern, and one of the reasons I was hesitant about pursuing this at all, the gross invasion of privacy necessary to pursue this feels caustic. My initial reaction was filing anonymously, but after some reading it appeared that these are very rarely granted. It ultimately became a choice between whether this was important enough (to both us and every other disabled participant SHRA treats similarly) to sacrifice our privacy over. I think so.

What should have been done out of the gate is setting a protective order in place. Reading and trying to absorb as much of the local rules (while in the middle of being evicted, being cut off from other services, and other harassment), the option of a protective order just didn’t really register at all. The further along the process we got, the less space the option took up.

This is one place where LLMs work as an amazing sanity check, we can without penalty ask it to walk us through the process from the beginning, with as many permutations as we like, in a way that would cognitively exhaust any sane individual. And for individuals in the position we are, where we don’t even know what we don’t know, being able to constantly refactor new information in as we encounter it becomes a tremendously powerful tool.

While there’s supposed to be some leeway for “inelegant pleading” by pro se parties, in practice there isn’t unless a judge is personally sympathetic. Which means that pro se parties march into a minefield of issues that have nothing to do with the core issues of the case, procedural explosions which can turn a clear cut set of facts into an irrelevant mist.

Leave a Comment

Your email address will not be published. Required fields are marked *

One of the most difficult aspects of all this is how prone to unique issues each LLM introduces. I’m bouncing between Claude 3.7, GPT 4o, DeepSeek, and Gemini, and they all have weird peculiarities that blow up the others. DeepSeek for instance is super focused on formatting, being obsessive about tables, bolding, italics, everything but the actual content. Gemini is a turd and likes to fight you the whole time when it comes to legal advice altogether, so you have to wrestle it to the ground with prompts and cobble together the results. Claude writes beautifully but is as deep as a puddle, while ChatGPT is deep but impossibly schizophrenic and ADHD at the same time.

I went back and looked at my first two pleadings I used these LLMs for and holy crap there are a lot of issues that I missed trusting the output. First, ChatGPT will absolutely mangle the hell out of citations, and worse is super confident about it’s manglings. As an example I uploaded a copy of the docket report, and somehow it still got every single citation wrong. Worse, it misinterpreted who actually filed particular motions, leading to it interpreting most of the case being dead.

One thing is really clear, that as powerful and as much potential as these tools have, we are a million miles away from them being ready to use without extensive supervision. They also are really terrible at understanding, as an example left to it’s own devices all of the LLMs completely ignored the actual damages I asked for in a motion for partial MSJ (basically just ruling that the ADA, FHA, and Rehab require an interactive process and individualized assessment) and transformed it into a full blown MSJ focused on damage calculations instead of a narrow injunctive relief. And they constantly go off on little tangents like that where they hyper focus on some aspect of the text that may or may not be contextually important and try to expand it.

All said, this is still very much a work in progress, and I’m learning that trust is bad.

Leave a Comment

Your email address will not be published. Required fields are marked *

It seems we can’t find what you’re looking for. Perhaps searching can help.

Welcome to The SHRA Files Blog — where we’ll be sharing updates, research, public records, and commentary on our ongoing civil rights lawsuit and the broader failures of Sacramento’s housing system.

This site exists because no one else was telling this story. Because no agency, nonprofit, or public official stepped in when the Sacramento Housing and Redevelopment Agency (SHRA) failed to meet its basic responsibilities. Because we — and many others like us — were denied access to services that should have been guaranteed.

We started with nothing but documentation, determination, and the experience of being repeatedly failed by the systems meant to help. When traditional legal aid and advocacy organizations told us they couldn’t help, we turned to AI tools to help us structure our case, understand the law, and begin fighting back.

This site is just the beginning.

In the weeks ahead, we’ll be:

  • Breaking down SHRA’s failures in serving disabled and low-income tenants
  • Publishing public filings and supporting exhibits
  • Highlighting how the Housing Choice Voucher program is being mismanaged
  • Documenting the role of AI in expanding legal access

If you’ve been impacted by SHRA or want to contribute to the project, check out our Get Involved page.

Thanks for reading — and welcome to the fight.

— The SHRA Files Team


I’ve been attempting to push discovery, and the results have been underwhelming. So far they’ve missed most of the deadlines; when notified that they missed the deadlines, they keep asking for extensions; and when they get extensions, they respond with boilerplate and bullshit. I’m to the point where I’m filing a Rule 37(b) motion (I’ll upload it to the files section) after I’m finished with this post. The neat thing about the filing is that there are over 300 pages of exhibits, all of it boilerplate and bullshit that I had to attach and mail back to the Defense, because the magistrate gave me the big middle finger by allowing the court to notify me of things electronically while refusing to allow me to file electronically. I have lots to say about this magistrate, but we’ll wait until we’re clear of her first.

One of the most fascinating things about the boilerplate and bullshit that did get returned was the revelation that five of the named Defendants no longer work at SHRA, and the Defense isn’t sure whether it still represents them. They completely hid the fact that almost everyone involved with day-to-day operations over there has left, and somehow didn’t mention it to us or to the court. Worse, in their initial disclosure they claimed only one person (who is no longer at SHRA) was responsible for deciding reasonable accommodation requests; now they’ve given me a list of 16 people, with no explanation.

Oh yeah, they can’t seem to find any of the responsive electronically stored information, which they’d be required to produce under the CPRA anyway, and that’s the reason they need so many extensions.

The worst part is that despite them admitting all of this, there’s zero chance of them facing sanctions. And there’s literally nothing to be done about it unless/until it gets to the appellate level.


Up until now I assumed that the issues at SHRA were just incompetence creep: the product of slow backsliding on compliance issues that never got corrected, to the point where the institutional knowledge needed to maintain compliance was lost.

One of the things that I really like about DeepSeek is that it gets information through methods that aren’t exactly on the up and up, which means it finds things that are hidden behind legal wrangling that doesn’t exist in China. And one of the things it found was a 2023 internal memo, generated through whistleblower testimony, in which SHRA staff were instructed to “prioritize voucher terminations over costly mods”.

And the implementation of this was SHRA terminating vouchers over requests for shower chairs and grab bars as early as 2022. It also surfaced data showing that 68% of disabled tenants who requested mods in 2023 lost their vouchers within 6 months (vs. 12% of non-disabled tenants).

WHAT.

And there’s so much worse that I’ll share once I get the discovery back. I’m stunned.


Okay the last few days have been a bit of a flurry. I’m still working on getting all the filings and the Defense responses uploaded, but I need to make sure we have a sane directory setup first because there’s quite a bit.

After a few weeks using these LLMs, my attitudes about them are starting to shift. Each of them seems to have its own “personality” above and beyond the artificial personality. The LLM I’ve been most amazed by so far is DeepSeek, which is funny because it was the one I was most skeptical of initially. Part of the issue with DeepSeek is that it doesn’t do as good a job of hand-holding through the thought process; or, more appropriately, it’s probably the most “autistic” of the engines. The biggest standout for DeepSeek (especially compared to Gemini, which is the most comparable) is that it’s SO GOOD with citations. It’s the only one that freely “admits” it’s plundering the Westlaw database, meaning we have a TON of local Eastern District citations.

My citation strategy has been to use something from each level: Supreme Court, 9th Circuit, California (Fed), Eastern District. The reality is the magistrate doesn’t read them anyway, but DeepSeek ultimately gives me a pool of citations to choose from, something the other LLMs struggle with (or just don’t do, no matter how you beat them).

Gemini (using 2.5 Pro/5-06) is now my primary starting point for writing, even though I hate that it can’t export to .odt or even .docx formats. It has the worst “canvas”-style option, because I have to reformat the documents significantly. That’s kind of worth the annoyance, though, since Gemini is easily the most “thoughtful” and well-rounded of the LLMs I’ve used. Much like DeepSeek gives lots of citation options, Gemini gives the best pool of writing options; its ability to flexibly address issues in different ways is the best of the bunch so far.

Claude is still my pretty-but-shallow option: it comes up with the best/most compelling pure language, even if its depth of analysis falls short of the others. I wish I didn’t have to prompt it so explicitly to get decent results, or it might be the primary canvas despite its painful context window. One thing I find it really helpful for is running my DeepSeek output (which produces pretty… “direct” language) through Claude to soften it up.

And finally there’s ChatGPT, which up until the last few days was my primary for everything. It’s easy to get started with, it’s encouraging, it produces compelling results, but it’s kind of a piece of shit. It’s constantly messing up context, screwing up citations, and arbitrarily rewriting things. It’s easily the most obstinately wrong of the LLMs, insisting that it’s doing something correctly no matter how many times you paste the incorrect output back into it and it admits it messed up. It’s the only LLM of the four where I’ve had to deal with “hallucinations”, and it explains them as “whoops, I overwrite my own context a lot accidentally”. It’s absolutely maddening at times. I’m steadily weaning myself off of it, and it’ll probably just be another voice in the room soon.

An interesting thing happened a bit ago: I had my first LLM disagreement! The Defense responded to a discovery request with bullshit and boilerplate, so I asked each of the LLMs what a good email response would be, considering I’m already nearly done with a Rule 37 motion to compel. Claude responded that it’s probably not a good idea to respond at all: just update the motion with the response as an exhibit, note the bullshit parts since they helped (a lot), and call it a day. DeepSeek was its typical “RIP THEIR THROATS OUT” self and provided a detailed list of regulations and rules for going on the offensive instead of only using the motion route. Gemini pointed out that going the email route is more about reinforcing our meet-and-confer duty: we’re right, and we’re going to integrate this into the motion, but the focus here should be on making the meet-and-confer record rock solid (pretty good advice). ChatGPT recommended the email, but produced recommendations that didn’t focus on strategy at all, just a direct rebuttal to the discovery response.

Interestingly, opposing counsel asked for a discovery extension (after the suit has been active for more than two years, they are suddenly finding SHRA’s bureaucracy hard to navigate), but refused to do a rolling production or produce electronic records (which should be pretty easy, right?). It’s baffling: this organization is such a mess that if I’d had access to these tools two years ago, they’d have gotten crushed out of the gate. Instead they were able to delay and avoid discovery to the point that we now have to start talking about institutional spoliation, because no one remembers anything.


Just had the thought that doing a blog like this while the litigation is ongoing is both naive and stupid, especially considering the mixed permissible-use messaging from the court (but maybe what’s okay for the bar is okay for me?). This popped up as I came across an old archive containing a backup of files I believed were lost after we got evicted, some of which are responsive to existing discovery requests. I haven’t even had a chance to go through it all yet and see what has already been disclosed and what hasn’t, but here I am in a position where I really want to talk about how the tools are helping me plan strategy while potentially exposing myself unnecessarily/inadvertently.

The fool forges forward

One of the things that stood out to me was how much of the information in this case should have already been protected but hasn’t been. This has always been a concern, and one of the reasons I was hesitant about pursuing this at all; the gross invasion of privacy necessary to pursue it feels caustic. My initial reaction was to file anonymously, but after some reading it appeared that anonymous filings are very rarely granted. It ultimately became a question of whether this was important enough (to both us and every other disabled participant SHRA treats similarly) to sacrifice our privacy over. I think it is.

What should have been done out of the gate was putting a protective order in place. While reading and trying to absorb as much of the local rules as I could (in the middle of being evicted, being cut off from other services, and other harassment), the option of a protective order just didn’t really register at all. The further along the process we got, the less space the option took up.

This is one place where LLMs work as an amazing sanity check: we can, without penalty, ask one to walk us through the process from the beginning, with as many permutations as we like, in a way that would cognitively exhaust any sane individual. And for individuals in our position, where we don’t even know what we don’t know, being able to constantly factor in new information as we encounter it becomes a tremendously powerful tool.

While there’s supposed to be some leeway for “inelegant pleading” by pro se parties, in practice there isn’t unless a judge is personally sympathetic. Which means pro se parties march into a minefield of issues that have nothing to do with the core issues of the case, procedural explosions that can turn a clear-cut set of facts into an irrelevant mist.


One of the most difficult aspects of all this is the unique set of issues each LLM introduces. I’m bouncing between Claude 3.7, GPT-4o, DeepSeek, and Gemini, and they all have weird peculiarities that blow up the others’ work. DeepSeek, for instance, is super focused on formatting, obsessing over tables, bolding, italics, everything but the actual content. Gemini is a turd that fights you the whole way when it comes to legal advice, so you have to wrestle it to the ground with prompts and cobble together the results. Claude writes beautifully but is as deep as a puddle, while ChatGPT is deep but impossibly schizophrenic and ADHD at the same time.

I went back and looked at the first two pleadings I used these LLMs for, and holy crap, there are a lot of issues I missed by trusting the output. First, ChatGPT will absolutely mangle the hell out of citations, and worse, it’s super confident about its manglings. As an example, I uploaded a copy of the docket report, and somehow it still got every single citation wrong. Worse, it misinterpreted who actually filed particular motions, leading it to interpret most of the case as being dead.


Leave a Comment

Your email address will not be published. Required fields are marked *

It seems we can’t find what you’re looking for. Perhaps searching can help.

Welcome to The SHRA Files Blog — where we’ll be sharing updates, research, public records, and commentary on our ongoing civil rights lawsuit and the broader failures of Sacramento’s housing system.

This site exists because no one else was telling this story. Because no agency, nonprofit, or public official stepped in when the Sacramento Housing and Redevelopment Agency (SHRA) failed to meet its basic responsibilities. Because we — and many others like us — were denied access to services that should have been guaranteed.

We started with nothing but documentation, determination, and the experience of being repeatedly failed by the systems meant to help. When traditional legal aid and advocacy organizations told us they couldn’t help, we turned to AI tools to help us structure our case, understand the law, and begin fighting back.

This site is just the beginning.

In the weeks ahead, we’ll be:

  • Breaking down SHRA’s failures in serving disabled and low-income tenants
  • Publishing public filings and supporting exhibits
  • Highlighting how the Housing Choice Voucher program is being mismanaged
  • Documenting the role of AI in expanding legal access

If you’ve been impacted by SHRA or want to contribute to the project, check out our Get Involved page.

Thanks for reading — and welcome to the fight.

— The SHRA Files Team

Leave a Comment

Your email address will not be published. Required fields are marked *

I’ve been attempting to push discovery, the result has been underwhelming. So far they’ve missed most of the deadlines, when notified they missed the deadlines they keep asking for extensions, and when they get extensions they respond with boilerplate and bullshit. I’m to the point where I’m filing a Rule 37(b) motion (will upload to the files section) after I’m finished with this post. The neat thing about the filing is that there’s over 300 pages of exhibits, all of which is boilerplate and bullshit that I had to attach and mail back to the Defense because the magistrate gave me the big middle finger by allowing the court to notify me of things electronically, but refusing to allow me to file electronically. I have lots to say about this magistrate, but we’ll wait until we get clear of her first.

One of the most fascinating things about the boilerplate and bullshit that did get returned was the revelation that five of the named Defendants no longer work at SHRA, and the Defense isn’t sure if they represent them anymore. Like they completely hid that almost everyone involved with the day to day operations over there has left and somehow didn’t mention it to us or the court. Worse, in their initial disclosure, they claimed only one person (who is no longer at SHRA) was responsible for deciding reasonable accommodation requests, now they’ve given me a list of 16 people, with no explanation.

Oh yeah, they can’t seem to find any of the electronically stored information that under the CPRA they’d be responsive to, which is the reason they need so many extensions.

The worst part is that despite them admitting all of this, there’s zero chance of them getting sanctions. And there’s literally nothing to be done about it unless/until it gets to the appellate level.

Leave a Comment

Your email address will not be published. Required fields are marked *

Up until now I assumed that the issues at SHRA were just incompetence creep, that the issues are product of slow pushes back on compliance issues that never got corrected, to the point where institutional knowledge of the practices necessary to maintain compliance was lost.

One of the things that I really like about DeepSeek is that it gets information through methods that aren’t exactly on the up and up, which means that it finds things which are hidden behind legal wrangling that doesn’t exist in China. And one of the things it found was a 2023 Internal Memo generated through whistleblower testimony in which SHRA staff instructed to “prioritize voucher terminations over costly mods”.

And the implementation of this was SHRA terminating vouchers over requests for shower chairs and grab bars as early as 2022. They provided data which showed 68% of disabled tenants who requested mods in 2023 lost vouchers within 6 months (vs. 12% of non-disabled).

WHAT.

And there’s so much worse that I’ll share once I get the discovery back. I’m stunned.

Leave a Comment

Your email address will not be published. Required fields are marked *

Okay the last few days have been a bit of a flurry. I’m still working on getting all the filings and the Defense responses uploaded, but I need to make sure we have a sane directory setup first because there’s quite a bit.

After a few weeks using these LLMs, my attitudes about them are starting to shift. Each of them seems to have their own “personality” above and beyond the artificial personality. The LLM I’ve been most amazed by so far is DeepSeek, which is funny because it was the LLM I was most skeptical of initially. Part of the issue with DeepSeek is that it doesn’t do as good of a job hand holding through the though process, or more appropriately, it’s probably the most “autistic” of the engines. The biggest standout for DeepSeek (especially compared to Gemini which is the most comparable) is that it’s SO GOOD with citations. It’s the only one that freely “admits” it’s plundering the WestLaw database meaning we have a TON of local Eastern District citations.

My citation strategy has been to use something from each level, Supreme Court, 9th Circuit, California (Fed), Eastern District. The reality is the magistrate doesn’t read them anyway, but DeepSeek ultimately gives me a pool of citations to choose from, something the other LLMs struggle with (or just don’t do no matter how you beat them).

Gemini (using 2.5 Pro/5-06) is now my primary writing starting point, even though I hate that it can’t export to .odf or even .docx formats. Because I have to reformat the documents significantly, it has the worst “canvas” style option. That’s kind of worth the annoyance though since Gemini is easily the most “thoughful” and well rounded of the LLMs I’ve used. Much like DeepSeek gives lots of citation options, Gemini gives the best pool of writing options, it’s ability to flexibly address issues in different ways is the best of the bunch so far.

Claude is still my pretty but shallow option, it comes up with the best/most compelling pure language, even if it’s depth of analysis falls short of the others. I wish I didn’t have to prompt it so explicitly to get decent results or it might be the primary canvas despite it’s painful context window. One thing I find it really helpful for is running my DeepSeek (which produces pretty.. “direct” language) through Claude to soften it up.

And finally there’s ChatGPT which up until the last few days was my primary for everything. It’s easy to get started with it, it’s encouraging, it produces compelling results, but it’s kind of a piece of shit. It’s constantly messing up context, screwing up citations, and arbitrarily rewriting things. It’s easily the most obstinately wrong of the LLMs, insisting that it’s doing something correctly no matter how many times you post the incorrect output back into it and it admits it messed up. It’s the only LLM out of the three that I’ve had to deal with “hallucinations”, and it explains it as “woops, I overwrite my own context a lot accidentally”. It’s absolutely maddening at times. I’m steadily weaning myself off of it and it’ll probably just be another voice in the room soon.

An interesting thing happened a bit ago, I had my first LLM disagreement! The Defense responded to a discovery request with bullshit and boilerplate, so I asked each of the LLMs what a good email response would be considering I’m already nearly done with a rule 37 motion to compel. Claude responded that it’s probably not a good idea to respond at all, just update the motion with the response as an exhibit, note the bullshit parts since they helped (a lot), and call it a day. DeepSeek was it’s typical “RIP THEIR THROATS OUT” and provided a detailed list of regulations and rules to go on the offensive instead of only using the motion route. Gemini pointed out that going the email route is more about reinforcing our meet and confer duty, we’re right and we are going to be integrating this in the motion, but the focus here should be on making the meet and confer rock solid (pretty good advice). ChatGPT recommended the email, but produced recommendations that didn’t focus on strategy at all, just a direct rebuttal to the discovery response.

Interestingly, opposing counsel asked for a discovery extension (after more than two years the suit has been active, they are suddenly finding SHRA bureaucracy hard to navigate), but refused to do a rolling production or produce electronic records (which should be pretty easy right?). It’s baffling to me that if I had access to these tools two years ago, this organization is such a mess that they’d have gotten crushed out of the gate. Instead they were able to delay and avoid discovery to the point that we now have to start talking about institutional spoliation because no one remembers anything.

Leave a Comment

Your email address will not be published. Required fields are marked *

Just had the thought that doing a blog like this, while the litigation is on-going, is both naive and stupid, especially considering the mixed permissible use messaging from the court (but maybe okay for the bar, okay for me?). This popped up as I came across an old archive that had a backup of files which I believed was lost after we got evicted, some of which is responsive to existing discovery requests. I haven’t even had a chance to go through it all yet and see what has already been disclosed and what hasn’t, but here I am in a position where I really want to talk about how the tools are helping me plan strategy while potentially exposing myself unnecessarily/inadvertently.

The fool forges forward

One of the things that stood out to me was how much of the information in this case should have already been protected but hasn’t been. This has always been a concern, and one of the reasons I was hesitant about pursuing this at all, the gross invasion of privacy necessary to pursue this feels caustic. My initial reaction was filing anonymously, but after some reading it appeared that these are very rarely granted. It ultimately became a choice between whether this was important enough (to both us and every other disabled participant SHRA treats similarly) to sacrifice our privacy over. I think so.

What should have been done out of the gate is setting a protective order in place. Reading and trying to absorb as much of the local rules (while in the middle of being evicted, being cut off from other services, and other harassment), the option of a protective order just didn’t really register at all. The further along the process we got, the less space the option took up.

This is one place where LLMs work as an amazing sanity check, we can without penalty ask it to walk us through the process from the beginning, with as many permutations as we like, in a way that would cognitively exhaust any sane individual. And for individuals in the position we are, where we don’t even know what we don’t know, being able to constantly refactor new information in as we encounter it becomes a tremendously powerful tool.

While there’s supposed to be some leeway for “inelegant pleading” by pro se parties, in practice there isn’t unless a judge is personally sympathetic. Which means that pro se parties march into a minefield of issues that have nothing to do with the core issues of the case, procedural explosions which can turn a clear cut set of facts into an irrelevant mist.

Leave a Comment

Your email address will not be published. Required fields are marked *

One of the most difficult aspects of all this is how prone to unique issues each LLM introduces. I’m bouncing between Claude 3.7, GPT 4o, DeepSeek, and Gemini, and they all have weird peculiarities that blow up the others. DeepSeek for instance is super focused on formatting, being obsessive about tables, bolding, italics, everything but the actual content. Gemini is a turd and likes to fight you the whole time when it comes to legal advice altogether, so you have to wrestle it to the ground with prompts and cobble together the results. Claude writes beautifully but is as deep as a puddle, while ChatGPT is deep but impossibly schizophrenic and ADHD at the same time.

I went back and looked at my first two pleadings I used these LLMs for and holy crap there are a lot of issues that I missed trusting the output. First, ChatGPT will absolutely mangle the hell out of citations, and worse is super confident about it’s manglings. As an example I uploaded a copy of the docket report, and somehow it still got every single citation wrong. Worse, it misinterpreted who actually filed particular motions, leading to it interpreting most of the case being dead.

One thing is really clear, that as powerful and as much potential as these tools have, we are a million miles away from them being ready to use without extensive supervision. They also are really terrible at understanding, as an example left to it’s own devices all of the LLMs completely ignored the actual damages I asked for in a motion for partial MSJ (basically just ruling that the ADA, FHA, and Rehab require an interactive process and individualized assessment) and transformed it into a full blown MSJ focused on damage calculations instead of a narrow injunctive relief. And they constantly go off on little tangents like that where they hyper focus on some aspect of the text that may or may not be contextually important and try to expand it.

All said, this is still very much a work in progress, and I’m learning that trust is bad.

Leave a Comment

Your email address will not be published. Required fields are marked *

It seems we can’t find what you’re looking for. Perhaps searching can help.

Welcome to The SHRA Files Blog — where we’ll be sharing updates, research, public records, and commentary on our ongoing civil rights lawsuit and the broader failures of Sacramento’s housing system.

This site exists because no one else was telling this story. Because no agency, nonprofit, or public official stepped in when the Sacramento Housing and Redevelopment Agency (SHRA) failed to meet its basic responsibilities. Because we — and many others like us — were denied access to services that should have been guaranteed.

We started with nothing but documentation, determination, and the experience of being repeatedly failed by the systems meant to help. When traditional legal aid and advocacy organizations told us they couldn’t help, we turned to AI tools to help us structure our case, understand the law, and begin fighting back.

This site is just the beginning.

In the weeks ahead, we’ll be:

  • Breaking down SHRA’s failures in serving disabled and low-income tenants
  • Publishing public filings and supporting exhibits
  • Highlighting how the Housing Choice Voucher program is being mismanaged
  • Documenting the role of AI in expanding legal access

If you’ve been impacted by SHRA or want to contribute to the project, check out our Get Involved page.

Thanks for reading — and welcome to the fight.

— The SHRA Files Team

Leave a Comment

Your email address will not be published. Required fields are marked *

I’ve been attempting to push discovery, the result has been underwhelming. So far they’ve missed most of the deadlines, when notified they missed the deadlines they keep asking for extensions, and when they get extensions they respond with boilerplate and bullshit. I’m to the point where I’m filing a Rule 37(b) motion (will upload to the files section) after I’m finished with this post. The neat thing about the filing is that there’s over 300 pages of exhibits, all of which is boilerplate and bullshit that I had to attach and mail back to the Defense because the magistrate gave me the big middle finger by allowing the court to notify me of things electronically, but refusing to allow me to file electronically. I have lots to say about this magistrate, but we’ll wait until we get clear of her first.

One of the most fascinating things about the boilerplate and bullshit that did get returned was the revelation that five of the named Defendants no longer work at SHRA, and the Defense isn’t sure if they represent them anymore. Like they completely hid that almost everyone involved with the day to day operations over there has left and somehow didn’t mention it to us or the court. Worse, in their initial disclosure, they claimed only one person (who is no longer at SHRA) was responsible for deciding reasonable accommodation requests, now they’ve given me a list of 16 people, with no explanation.

Oh yeah, they can’t seem to find any of the electronically stored information that under the CPRA they’d be responsive to, which is the reason they need so many extensions.

The worst part is that despite them admitting all of this, there’s zero chance of them getting sanctions. And there’s literally nothing to be done about it unless/until it gets to the appellate level.

Leave a Comment

Your email address will not be published. Required fields are marked *

Up until now I assumed that the issues at SHRA were just incompetence creep, that the issues are product of slow pushes back on compliance issues that never got corrected, to the point where institutional knowledge of the practices necessary to maintain compliance was lost.

One of the things that I really like about DeepSeek is that it gets information through methods that aren’t exactly on the up and up, which means that it finds things which are hidden behind legal wrangling that doesn’t exist in China. And one of the things it found was a 2023 Internal Memo generated through whistleblower testimony in which SHRA staff instructed to “prioritize voucher terminations over costly mods”.

And the implementation of this was SHRA terminating vouchers over requests for shower chairs and grab bars as early as 2022. They provided data which showed 68% of disabled tenants who requested mods in 2023 lost vouchers within 6 months (vs. 12% of non-disabled).

WHAT.

And there’s so much worse that I’ll share once I get the discovery back. I’m stunned.

Leave a Comment

Your email address will not be published. Required fields are marked *

Okay the last few days have been a bit of a flurry. I’m still working on getting all the filings and the Defense responses uploaded, but I need to make sure we have a sane directory setup first because there’s quite a bit.

After a few weeks using these LLMs, my attitudes about them are starting to shift. Each of them seems to have their own “personality” above and beyond the artificial personality. The LLM I’ve been most amazed by so far is DeepSeek, which is funny because it was the LLM I was most skeptical of initially. Part of the issue with DeepSeek is that it doesn’t do as good of a job hand holding through the though process, or more appropriately, it’s probably the most “autistic” of the engines. The biggest standout for DeepSeek (especially compared to Gemini which is the most comparable) is that it’s SO GOOD with citations. It’s the only one that freely “admits” it’s plundering the WestLaw database meaning we have a TON of local Eastern District citations.

My citation strategy has been to use something from each level: Supreme Court, 9th Circuit, California (federal), Eastern District. The reality is the magistrate doesn’t read them anyway, but DeepSeek ultimately gives me a pool of citations to choose from, something the other LLMs struggle with (or just don’t do, no matter how you beat them).
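
The bookkeeping side of that strategy is simple enough to sketch. The case names below are invented placeholders, not real citations; the point is just grouping the pool by court level and pulling one from each:

```python
# Sketch: group candidate citations by court level, then pick one per level.
# All entries are invented placeholders, not real citations.
from collections import defaultdict

pool = [
    ("Supreme Court", "Placeholder v. Example, 500 U.S. 1 (1991)"),
    ("9th Circuit", "Example v. Agency, 100 F.3d 200 (9th Cir. 1996)"),
    ("9th Circuit", "Another v. Board, 200 F.3d 300 (9th Cir. 2000)"),
    ("E.D. Cal.", "Tenant v. Authority, 50 F. Supp. 3d 400 (E.D. Cal. 2014)"),
]

by_level = defaultdict(list)
for level, cite in pool:
    by_level[level].append(cite)

# One citation from each level, in descending order of authority.
order = ["Supreme Court", "9th Circuit", "E.D. Cal."]
selection = [by_level[level][0] for level in order if by_level[level]]
print(selection)
```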

Gemini (using 2.5 Pro/5-06) is now my primary starting point for writing, even though I hate that it can’t export to .odt or even .docx formats. It also has the worst “canvas”-style option, so I have to reformat the documents significantly. That’s worth the annoyance, though, since Gemini is easily the most thoughtful and well-rounded of the LLMs I’ve used. Much like DeepSeek gives lots of citation options, Gemini gives the best pool of writing options; its ability to flexibly address issues in different ways is the best of the bunch so far.

Claude is still my pretty-but-shallow option: it comes up with the best and most compelling pure language, even if its depth of analysis falls short of the others. I wish I didn’t have to prompt it so explicitly to get decent results, or it might be the primary canvas despite its painful context window. One thing I find it really helpful for is running my DeepSeek output (which tends to be pretty… “direct”) through Claude to soften it up.

And finally there’s ChatGPT, which up until the last few days was my primary for everything. It’s easy to get started with, it’s encouraging, it produces compelling results, but it’s kind of a piece of shit. It’s constantly messing up context, screwing up citations, and arbitrarily rewriting things. It’s easily the most obstinately wrong of the LLMs, insisting that it’s doing something correctly no matter how many times you paste the incorrect output back into it and it admits it messed up. It’s the only one of the four where I’ve had to deal with “hallucinations”, and it explains them as “whoops, I accidentally overwrite my own context a lot”. It’s absolutely maddening at times. I’m steadily weaning myself off of it, and it’ll probably just be another voice in the room soon.

An interesting thing happened a bit ago: I had my first LLM disagreement! The Defense responded to a discovery request with bullshit and boilerplate, so I asked each of the LLMs what a good email response would be, considering I’m already nearly done with a Rule 37 motion to compel. Claude responded that it’s probably not a good idea to respond at all: just update the motion with the response as an exhibit, note the bullshit parts since they helped (a lot), and call it a day. DeepSeek was its typical “RIP THEIR THROATS OUT” self and provided a detailed list of regulations and rules for going on the offensive instead of only using the motion route. Gemini pointed out that going the email route is more about reinforcing our meet-and-confer duty: we’re right, and we’re going to be integrating this into the motion, but the focus here should be on making the meet-and-confer record rock solid (pretty good advice). ChatGPT recommended the email, but produced recommendations that didn’t focus on strategy at all, just a direct rebuttal to the discovery response.

Interestingly, opposing counsel asked for a discovery extension (after the suit has been active for more than two years, they’re suddenly finding SHRA’s bureaucracy hard to navigate), but refused to do a rolling production or produce electronic records (which should be pretty easy, right?). It’s baffling: if I’d had access to these tools two years ago, this organization is such a mess that they’d have been crushed out of the gate. Instead they were able to delay and avoid discovery to the point that we now have to start talking about institutional spoliation, because no one remembers anything.

Just had the thought that doing a blog like this while the litigation is ongoing is both naive and stupid, especially considering the mixed permissible-use messaging from the court (but maybe what’s okay for the bar is okay for me?). This popped up because I came across an old archive with a backup of files I believed were lost after we got evicted, some of which are responsive to existing discovery requests. I haven’t even had a chance to go through it all yet and see what has already been disclosed and what hasn’t, but here I am in a position where I really want to talk about how the tools are helping me plan strategy while potentially exposing myself unnecessarily or inadvertently.

The fool forges forward

One of the things that stood out to me was how much of the information in this case should already have been protected but hasn’t been. This has always been a concern, and one of the reasons I was hesitant about pursuing this at all: the gross invasion of privacy necessary to pursue it feels caustic. My initial reaction was to file anonymously, but after some reading it appeared that those requests are very rarely granted. It ultimately became a question of whether this was important enough (to both us and every other disabled participant SHRA treats similarly) to sacrifice our privacy over. I think it is.

What should have been done out of the gate was putting a protective order in place. While I was reading and trying to absorb as much of the local rules as I could (in the middle of being evicted, cut off from other services, and otherwise harassed), the option of a protective order just didn’t register at all. The further along the process we got, the less space the option took up.

This is one place where LLMs work as an amazing sanity check: we can, without penalty, ask them to walk us through the process from the beginning, with as many permutations as we like, in a way that would cognitively exhaust any sane individual. And for people in our position, who don’t even know what we don’t know, being able to constantly factor in new information as we encounter it becomes a tremendously powerful tool.
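
The “permutations” part is mechanical enough to sketch: pick the factors you’re unsure about and generate every combination as its own walkthrough request to put to the model. The factors below are made up for illustration:

```python
# Sketch: enumerate combinations of uncertain factors so each combination
# can be posed to an LLM as its own walkthrough request.
# The factors and choices here are illustrative, not from the actual case.
from itertools import product

factors = {
    "filing posture": ["before discovery closes", "after discovery closes"],
    "relief sought": ["protective order", "motion to compel"],
    "audience": ["magistrate", "district judge"],
}

prompts = [
    "Walk me through the process assuming: " + ", ".join(
        f"{name} = {choice}" for name, choice in zip(factors, combo)
    )
    for combo in product(*factors.values())
]
print(len(prompts))  # 2 * 2 * 2 combinations
```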

While there’s supposed to be some leeway for “inelegant pleading” by pro se parties, in practice there isn’t unless a judge is personally sympathetic. That means pro se parties march into a minefield of issues that have nothing to do with the core issues of the case: procedural explosions that can turn a clear-cut set of facts into an irrelevant mist.

One of the most difficult aspects of all this is how many unique issues each LLM introduces. I’m bouncing between Claude 3.7, GPT-4o, DeepSeek, and Gemini, and they all have weird peculiarities that blow up the others’ work. DeepSeek, for instance, is super focused on formatting, obsessing over tables, bolding, italics, everything but the actual content. Gemini is a turd that fights you the whole time when it comes to legal advice at all, so you have to wrestle it to the ground with prompts and cobble together the results. Claude writes beautifully but is as deep as a puddle, while ChatGPT is deep but impossibly schizophrenic and ADHD at the same time.

I went back and looked at the first two pleadings I used these LLMs for, and holy crap, there are a lot of issues I missed by trusting the output. First, ChatGPT will absolutely mangle the hell out of citations, and worse, it’s super confident about its manglings. As an example, I uploaded a copy of the docket report, and somehow it still got every single citation wrong. Worse, it misinterpreted who actually filed particular motions, leading it to conclude that most of the case was dead.

One thing is really clear: as powerful as these tools are and as much potential as they have, we are a million miles away from them being ready to use without extensive supervision. They’re also really terrible at understanding. As an example, left to their own devices, all of the LLMs completely ignored the actual relief I asked for in a motion for partial summary judgment (basically just a ruling that the ADA, FHA, and Rehab Act require an interactive process and individualized assessment) and transformed it into a full-blown MSJ focused on damage calculations instead of narrow injunctive relief. And they constantly go off on little tangents like that, hyper-focusing on some aspect of the text that may or may not be contextually important and trying to expand it.

All said, this is still very much a work in progress, and I’m learning that trust is bad.

Welcome to The SHRA Files Blog — where we’ll be sharing updates, research, public records, and commentary on our ongoing civil rights lawsuit and the broader failures of Sacramento’s housing system.

This site exists because no one else was telling this story. Because no agency, nonprofit, or public official stepped in when the Sacramento Housing and Redevelopment Agency (SHRA) failed to meet its basic responsibilities. Because we — and many others like us — were denied access to services that should have been guaranteed.

We started with nothing but documentation, determination, and the experience of being repeatedly failed by the systems meant to help. When traditional legal aid and advocacy organizations told us they couldn’t help, we turned to AI tools to help us structure our case, understand the law, and begin fighting back.

This site is just the beginning.

In the weeks ahead, we’ll be:

  • Breaking down SHRA’s failures in serving disabled and low-income tenants
  • Publishing public filings and supporting exhibits
  • Highlighting how the Housing Choice Voucher program is being mismanaged
  • Documenting the role of AI in expanding legal access

If you’ve been impacted by SHRA or want to contribute to the project, check out our Get Involved page.

Thanks for reading — and welcome to the fight.

— The SHRA Files Team

Leave a Comment

Your email address will not be published. Required fields are marked *

Up until now I assumed that the issues at SHRA were just incompetence creep, that the issues are product of slow pushes back on compliance issues that never got corrected, to the point where institutional knowledge of the practices necessary to maintain compliance was lost.

One of the things that I really like about DeepSeek is that it gets information through methods that aren’t exactly on the up and up, which means that it finds things which are hidden behind legal wrangling that doesn’t exist in China. And one of the things it found was a 2023 Internal Memo generated through whistleblower testimony in which SHRA staff instructed to “prioritize voucher terminations over costly mods”.

And the implementation of this was SHRA terminating vouchers over requests for shower chairs and grab bars as early as 2022. They provided data which showed 68% of disabled tenants who requested mods in 2023 lost vouchers within 6 months (vs. 12% of non-disabled).

WHAT.

And there’s so much worse that I’ll share once I get the discovery back. I’m stunned.

Leave a Comment

Your email address will not be published. Required fields are marked *

Okay the last few days have been a bit of a flurry. I’m still working on getting all the filings and the Defense responses uploaded, but I need to make sure we have a sane directory setup first because there’s quite a bit.

After a few weeks using these LLMs, my attitudes about them are starting to shift. Each of them seems to have their own “personality” above and beyond the artificial personality. The LLM I’ve been most amazed by so far is DeepSeek, which is funny because it was the LLM I was most skeptical of initially. Part of the issue with DeepSeek is that it doesn’t do as good of a job hand holding through the though process, or more appropriately, it’s probably the most “autistic” of the engines. The biggest standout for DeepSeek (especially compared to Gemini which is the most comparable) is that it’s SO GOOD with citations. It’s the only one that freely “admits” it’s plundering the WestLaw database meaning we have a TON of local Eastern District citations.

My citation strategy has been to use something from each level, Supreme Court, 9th Circuit, California (Fed), Eastern District. The reality is the magistrate doesn’t read them anyway, but DeepSeek ultimately gives me a pool of citations to choose from, something the other LLMs struggle with (or just don’t do no matter how you beat them).

Gemini (using 2.5 Pro/5-06) is now my primary writing starting point, even though I hate that it can’t export to .odf or even .docx formats. Because I have to reformat the documents significantly, it has the worst “canvas” style option. That’s kind of worth the annoyance though since Gemini is easily the most “thoughful” and well rounded of the LLMs I’ve used. Much like DeepSeek gives lots of citation options, Gemini gives the best pool of writing options, it’s ability to flexibly address issues in different ways is the best of the bunch so far.

Claude is still my pretty but shallow option, it comes up with the best/most compelling pure language, even if it’s depth of analysis falls short of the others. I wish I didn’t have to prompt it so explicitly to get decent results or it might be the primary canvas despite it’s painful context window. One thing I find it really helpful for is running my DeepSeek (which produces pretty.. “direct” language) through Claude to soften it up.

And finally there’s ChatGPT which up until the last few days was my primary for everything. It’s easy to get started with it, it’s encouraging, it produces compelling results, but it’s kind of a piece of shit. It’s constantly messing up context, screwing up citations, and arbitrarily rewriting things. It’s easily the most obstinately wrong of the LLMs, insisting that it’s doing something correctly no matter how many times you post the incorrect output back into it and it admits it messed up. It’s the only LLM out of the three that I’ve had to deal with “hallucinations”, and it explains it as “woops, I overwrite my own context a lot accidentally”. It’s absolutely maddening at times. I’m steadily weaning myself off of it and it’ll probably just be another voice in the room soon.

An interesting thing happened a bit ago, I had my first LLM disagreement! The Defense responded to a discovery request with bullshit and boilerplate, so I asked each of the LLMs what a good email response would be considering I’m already nearly done with a rule 37 motion to compel. Claude responded that it’s probably not a good idea to respond at all, just update the motion with the response as an exhibit, note the bullshit parts since they helped (a lot), and call it a day. DeepSeek was it’s typical “RIP THEIR THROATS OUT” and provided a detailed list of regulations and rules to go on the offensive instead of only using the motion route. Gemini pointed out that going the email route is more about reinforcing our meet and confer duty, we’re right and we are going to be integrating this in the motion, but the focus here should be on making the meet and confer rock solid (pretty good advice). ChatGPT recommended the email, but produced recommendations that didn’t focus on strategy at all, just a direct rebuttal to the discovery response.

Interestingly, opposing counsel asked for a discovery extension (after more than two years the suit has been active, they are suddenly finding SHRA bureaucracy hard to navigate), but refused to do a rolling production or produce electronic records (which should be pretty easy right?). It’s baffling to me that if I had access to these tools two years ago, this organization is such a mess that they’d have gotten crushed out of the gate. Instead they were able to delay and avoid discovery to the point that we now have to start talking about institutional spoliation because no one remembers anything.

Leave a Comment

Your email address will not be published. Required fields are marked *

Just had the thought that doing a blog like this, while the litigation is on-going, is both naive and stupid, especially considering the mixed permissible use messaging from the court (but maybe okay for the bar, okay for me?). This popped up as I came across an old archive that had a backup of files which I believed was lost after we got evicted, some of which is responsive to existing discovery requests. I haven’t even had a chance to go through it all yet and see what has already been disclosed and what hasn’t, but here I am in a position where I really want to talk about how the tools are helping me plan strategy while potentially exposing myself unnecessarily/inadvertently.

The fool forges forward

One of the things that stood out to me was how much of the information in this case should have already been protected but hasn’t been. This has always been a concern, and one of the reasons I was hesitant about pursuing this at all, the gross invasion of privacy necessary to pursue this feels caustic. My initial reaction was filing anonymously, but after some reading it appeared that these are very rarely granted. It ultimately became a choice between whether this was important enough (to both us and every other disabled participant SHRA treats similarly) to sacrifice our privacy over. I think so.

What should have been done out of the gate is setting a protective order in place. Reading and trying to absorb as much of the local rules (while in the middle of being evicted, being cut off from other services, and other harassment), the option of a protective order just didn’t really register at all. The further along the process we got, the less space the option took up.

This is one place where LLMs work as an amazing sanity check, we can without penalty ask it to walk us through the process from the beginning, with as many permutations as we like, in a way that would cognitively exhaust any sane individual. And for individuals in the position we are, where we don’t even know what we don’t know, being able to constantly refactor new information in as we encounter it becomes a tremendously powerful tool.

While there’s supposed to be some leeway for “inelegant pleading” by pro se parties, in practice there isn’t unless a judge is personally sympathetic. Which means that pro se parties march into a minefield of issues that have nothing to do with the core issues of the case, procedural explosions which can turn a clear cut set of facts into an irrelevant mist.

Leave a Comment

Your email address will not be published. Required fields are marked *

One of the most difficult aspects of all this is how prone to unique issues each LLM introduces. I’m bouncing between Claude 3.7, GPT 4o, DeepSeek, and Gemini, and they all have weird peculiarities that blow up the others. DeepSeek for instance is super focused on formatting, being obsessive about tables, bolding, italics, everything but the actual content. Gemini is a turd and likes to fight you the whole time when it comes to legal advice altogether, so you have to wrestle it to the ground with prompts and cobble together the results. Claude writes beautifully but is as deep as a puddle, while ChatGPT is deep but impossibly schizophrenic and ADHD at the same time.

I went back and looked at my first two pleadings I used these LLMs for and holy crap there are a lot of issues that I missed trusting the output. First, ChatGPT will absolutely mangle the hell out of citations, and worse is super confident about it’s manglings. As an example I uploaded a copy of the docket report, and somehow it still got every single citation wrong. Worse, it misinterpreted who actually filed particular motions, leading to it interpreting most of the case being dead.

One thing is really clear, that as powerful and as much potential as these tools have, we are a million miles away from them being ready to use without extensive supervision. They also are really terrible at understanding, as an example left to it’s own devices all of the LLMs completely ignored the actual damages I asked for in a motion for partial MSJ (basically just ruling that the ADA, FHA, and Rehab require an interactive process and individualized assessment) and transformed it into a full blown MSJ focused on damage calculations instead of a narrow injunctive relief. And they constantly go off on little tangents like that where they hyper focus on some aspect of the text that may or may not be contextually important and try to expand it.

All said, this is still very much a work in progress, and I’m learning that trust is bad.

Leave a Comment

Your email address will not be published. Required fields are marked *

It seems we can’t find what you’re looking for. Perhaps searching can help.

Welcome to The SHRA Files Blog — where we’ll be sharing updates, research, public records, and commentary on our ongoing civil rights lawsuit and the broader failures of Sacramento’s housing system.

This site exists because no one else was telling this story. Because no agency, nonprofit, or public official stepped in when the Sacramento Housing and Redevelopment Agency (SHRA) failed to meet its basic responsibilities. Because we — and many others like us — were denied access to services that should have been guaranteed.

We started with nothing but documentation, determination, and the experience of being repeatedly failed by the systems meant to help. When traditional legal aid and advocacy organizations told us they couldn’t help, we turned to AI tools to help us structure our case, understand the law, and begin fighting back.

This site is just the beginning.

In the weeks ahead, we’ll be:

  • Breaking down SHRA’s failures in serving disabled and low-income tenants
  • Publishing public filings and supporting exhibits
  • Highlighting how the Housing Choice Voucher program is being mismanaged
  • Documenting the role of AI in expanding legal access

If you’ve been impacted by SHRA or want to contribute to the project, check out our Get Involved page.

Thanks for reading — and welcome to the fight.

— The SHRA Files Team

Leave a Comment

Your email address will not be published. Required fields are marked *

I’ve been attempting to push discovery, the result has been underwhelming. So far they’ve missed most of the deadlines, when notified they missed the deadlines they keep asking for extensions, and when they get extensions they respond with boilerplate and bullshit. I’m to the point where I’m filing a Rule 37(b) motion (will upload to the files section) after I’m finished with this post. The neat thing about the filing is that there’s over 300 pages of exhibits, all of which is boilerplate and bullshit that I had to attach and mail back to the Defense because the magistrate gave me the big middle finger by allowing the court to notify me of things electronically, but refusing to allow me to file electronically. I have lots to say about this magistrate, but we’ll wait until we get clear of her first.

One of the most fascinating things about the boilerplate and bullshit that did get returned was the revelation that five of the named Defendants no longer work at SHRA, and the Defense isn’t sure if they represent them anymore. Like they completely hid that almost everyone involved with the day to day operations over there has left and somehow didn’t mention it to us or the court. Worse, in their initial disclosure, they claimed only one person (who is no longer at SHRA) was responsible for deciding reasonable accommodation requests, now they’ve given me a list of 16 people, with no explanation.

Oh yeah, they can’t seem to find any of the electronically stored information that under the CPRA they’d be responsive to, which is the reason they need so many extensions.

The worst part is that despite them admitting all of this, there’s zero chance of them getting sanctions. And there’s literally nothing to be done about it unless/until it gets to the appellate level.

Leave a Comment

Your email address will not be published. Required fields are marked *

Up until now I assumed that the issues at SHRA were just incompetence creep, that the issues are product of slow pushes back on compliance issues that never got corrected, to the point where institutional knowledge of the practices necessary to maintain compliance was lost.

One of the things that I really like about DeepSeek is that it gets information through methods that aren’t exactly on the up and up, which means that it finds things which are hidden behind legal wrangling that doesn’t exist in China. And one of the things it found was a 2023 Internal Memo generated through whistleblower testimony in which SHRA staff instructed to “prioritize voucher terminations over costly mods”.

And the implementation of this was SHRA terminating vouchers over requests for shower chairs and grab bars as early as 2022. They provided data which showed 68% of disabled tenants who requested mods in 2023 lost vouchers within 6 months (vs. 12% of non-disabled).

WHAT.

And there’s so much worse that I’ll share once I get the discovery back. I’m stunned.

Leave a Comment

Your email address will not be published. Required fields are marked *

Okay the last few days have been a bit of a flurry. I’m still working on getting all the filings and the Defense responses uploaded, but I need to make sure we have a sane directory setup first because there’s quite a bit.

After a few weeks using these LLMs, my attitudes about them are starting to shift. Each of them seems to have their own “personality” above and beyond the artificial personality. The LLM I’ve been most amazed by so far is DeepSeek, which is funny because it was the LLM I was most skeptical of initially. Part of the issue with DeepSeek is that it doesn’t do as good of a job hand holding through the though process, or more appropriately, it’s probably the most “autistic” of the engines. The biggest standout for DeepSeek (especially compared to Gemini which is the most comparable) is that it’s SO GOOD with citations. It’s the only one that freely “admits” it’s plundering the WestLaw database meaning we have a TON of local Eastern District citations.

My citation strategy has been to use something from each level, Supreme Court, 9th Circuit, California (Fed), Eastern District. The reality is the magistrate doesn’t read them anyway, but DeepSeek ultimately gives me a pool of citations to choose from, something the other LLMs struggle with (or just don’t do no matter how you beat them).

Gemini (using 2.5 Pro/5-06) is now my primary writing starting point, even though I hate that it can’t export to .odf or even .docx formats. Because I have to reformat the documents significantly, it has the worst “canvas” style option. That’s kind of worth the annoyance though since Gemini is easily the most “thoughful” and well rounded of the LLMs I’ve used. Much like DeepSeek gives lots of citation options, Gemini gives the best pool of writing options, it’s ability to flexibly address issues in different ways is the best of the bunch so far.

Claude is still my pretty but shallow option, it comes up with the best/most compelling pure language, even if it’s depth of analysis falls short of the others. I wish I didn’t have to prompt it so explicitly to get decent results or it might be the primary canvas despite it’s painful context window. One thing I find it really helpful for is running my DeepSeek (which produces pretty.. “direct” language) through Claude to soften it up.

And finally there’s ChatGPT which up until the last few days was my primary for everything. It’s easy to get started with it, it’s encouraging, it produces compelling results, but it’s kind of a piece of shit. It’s constantly messing up context, screwing up citations, and arbitrarily rewriting things. It’s easily the most obstinately wrong of the LLMs, insisting that it’s doing something correctly no matter how many times you post the incorrect output back into it and it admits it messed up. It’s the only LLM out of the three that I’ve had to deal with “hallucinations”, and it explains it as “woops, I overwrite my own context a lot accidentally”. It’s absolutely maddening at times. I’m steadily weaning myself off of it and it’ll probably just be another voice in the room soon.

An interesting thing happened a bit ago, I had my first LLM disagreement! The Defense responded to a discovery request with bullshit and boilerplate, so I asked each of the LLMs what a good email response would be considering I’m already nearly done with a rule 37 motion to compel. Claude responded that it’s probably not a good idea to respond at all, just update the motion with the response as an exhibit, note the bullshit parts since they helped (a lot), and call it a day. DeepSeek was it’s typical “RIP THEIR THROATS OUT” and provided a detailed list of regulations and rules to go on the offensive instead of only using the motion route. Gemini pointed out that going the email route is more about reinforcing our meet and confer duty, we’re right and we are going to be integrating this in the motion, but the focus here should be on making the meet and confer rock solid (pretty good advice). ChatGPT recommended the email, but produced recommendations that didn’t focus on strategy at all, just a direct rebuttal to the discovery response.

Interestingly, opposing counsel asked for a discovery extension (after more than two years the suit has been active, they are suddenly finding SHRA bureaucracy hard to navigate), but refused to do a rolling production or produce electronic records (which should be pretty easy right?). It’s baffling to me that if I had access to these tools two years ago, this organization is such a mess that they’d have gotten crushed out of the gate. Instead they were able to delay and avoid discovery to the point that we now have to start talking about institutional spoliation because no one remembers anything.

Leave a Comment

Your email address will not be published. Required fields are marked *

Just had the thought that doing a blog like this while the litigation is ongoing is both naive and stupid, especially considering the mixed permissible-use messaging from the court (but maybe what’s okay for the bar is okay for me?). This popped up because I came across an old archive with a backup of files I believed were lost after we got evicted, some of which are responsive to existing discovery requests. I haven’t even had a chance to go through it all yet and see what has already been disclosed and what hasn’t, but here I am, wanting to talk about how these tools are helping me plan strategy while potentially exposing myself unnecessarily or inadvertently.

The fool forges forward

One of the things that stood out to me was how much of the information in this case should have already been protected but hasn’t been. This has always been a concern, and one of the reasons I was hesitant about pursuing this at all: the gross invasion of privacy necessary to pursue it feels caustic. My initial instinct was to file anonymously, but after some reading it appeared that requests to proceed anonymously are very rarely granted. It ultimately came down to whether this was important enough (to both us and every other disabled participant SHRA treats similarly) to sacrifice our privacy over. I think it is.

What should have been done out of the gate was getting a protective order in place. While I was reading and trying to absorb as much of the local rules as I could (in the middle of being evicted, being cut off from other services, and other harassment), the option of a protective order just didn’t register at all. The further along the process we got, the less space the option took up.

This is one place where LLMs work as an amazing sanity check: we can, without penalty, ask one to walk us through the process from the beginning, with as many permutations as we like, in a way that would cognitively exhaust any sane individual. And for people in our position, who don’t even know what we don’t know, being able to constantly factor in new information as we encounter it is a tremendously powerful tool.

While there’s supposed to be some leeway for “inelegant pleading” by pro se parties, in practice there isn’t unless a judge is personally sympathetic. That means pro se parties march into a minefield of issues that have nothing to do with the core issues of the case: procedural explosions that can turn a clear-cut set of facts into an irrelevant mist.


One of the most difficult aspects of all this is how each LLM introduces its own unique issues. I’m bouncing between Claude 3.7, GPT-4o, DeepSeek, and Gemini, and they all have weird peculiarities that trip up the others when you pass work between them. DeepSeek, for instance, is super focused on formatting, obsessing over tables, bolding, italics, everything but the actual content. Gemini is a turd that fights you the whole way on legal advice altogether, so you have to wrestle it to the ground with prompts and cobble together the results. Claude writes beautifully but is as deep as a puddle, while ChatGPT is deep but impossibly scattered and distractible at the same time.

I went back and looked at the first two pleadings I used these LLMs for, and holy crap, there are a lot of issues I missed by trusting the output. First, ChatGPT will absolutely mangle the hell out of citations, and worse, is super confident about its manglings. As an example, I uploaded a copy of the docket report, and somehow it still got every single citation wrong. Worse, it misinterpreted who actually filed particular motions, leading it to conclude that most of the case was dead.

One thing is really clear: as powerful as these tools are and as much potential as they have, we are a million miles away from them being ready to use without extensive supervision. They’re also really terrible at understanding. As an example, left to their own devices, all of the LLMs completely ignored the narrow relief I asked for in a motion for partial MSJ (basically just a ruling that the ADA, FHA, and Rehab Act require an interactive process and individualized assessment) and transformed it into a full-blown MSJ focused on damage calculations instead of narrow injunctive relief. They constantly go off on little tangents like that, hyper-focusing on some aspect of the text that may or may not be contextually important and trying to expand it.

All said, this is still very much a work in progress, and I’m learning that trust is bad.


Welcome to The SHRA Files Blog — where we’ll be sharing updates, research, public records, and commentary on our ongoing civil rights lawsuit and the broader failures of Sacramento’s housing system.

This site exists because no one else was telling this story. Because no agency, nonprofit, or public official stepped in when the Sacramento Housing and Redevelopment Agency (SHRA) failed to meet its basic responsibilities. Because we — and many others like us — were denied access to services that should have been guaranteed.

We started with nothing but documentation, determination, and the experience of being repeatedly failed by the systems meant to help. When traditional legal aid and advocacy organizations told us they couldn’t help, we turned to AI tools to help us structure our case, understand the law, and begin fighting back.

This site is just the beginning.

In the weeks ahead, we’ll be:

  • Breaking down SHRA’s failures in serving disabled and low-income tenants
  • Publishing public filings and supporting exhibits
  • Highlighting how the Housing Choice Voucher program is being mismanaged
  • Documenting the role of AI in expanding legal access

If you’ve been impacted by SHRA or want to contribute to the project, check out our Get Involved page.

Thanks for reading — and welcome to the fight.

— The SHRA Files Team


I’ve been attempting to push discovery, and the results have been underwhelming. So far they’ve missed most of the deadlines; when notified that they missed the deadlines, they keep asking for extensions; and when they get extensions, they respond with boilerplate and bullshit. I’m to the point where I’m filing a Rule 37(b) motion (I’ll upload it to the files section) after I’m finished with this post. The neat thing about the filing is that there are over 300 pages of exhibits, all of it boilerplate and bullshit that I had to attach and mail back to the Defense, because the magistrate gave me the big middle finger by allowing the court to notify me of things electronically while refusing to let me file electronically. I have lots to say about this magistrate, but we’ll wait until we’re clear of her first.

One of the most fascinating things about the boilerplate and bullshit that did come back was the revelation that five of the named Defendants no longer work at SHRA, and the Defense isn’t sure whether it still represents them. They completely hid the fact that almost everyone involved with day-to-day operations over there has left, and somehow didn’t mention it to us or the court. Worse, in their initial disclosure they claimed only one person (who is no longer at SHRA) was responsible for deciding reasonable accommodation requests; now they’ve given me a list of 16 people, with no explanation.

Oh yeah, and they can’t seem to find any of the electronically stored information that would be responsive under the CPRA, which is supposedly why they need so many extensions.

The worst part is that despite them admitting all of this, there’s zero chance of them getting sanctioned. And there’s literally nothing to be done about it unless and until it gets to the appellate level.


Up until now I assumed that the issues at SHRA were just incompetence creep: that they were the product of slow backsliding on compliance that never got corrected, to the point where institutional knowledge of the practices necessary to maintain compliance was lost.

One of the things I really like about DeepSeek is that it gets information through methods that aren’t exactly on the up and up, which means it finds things hidden behind legal wrangling that doesn’t exist in China. One of the things it surfaced was a 2023 internal memo, generated through whistleblower testimony, in which SHRA staff were instructed to “prioritize voucher terminations over costly mods”.

And the implementation of this was SHRA terminating vouchers over requests for shower chairs and grab bars as early as 2022. It provided data showing that 68% of disabled tenants who requested mods in 2023 lost their vouchers within 6 months (vs. 12% of non-disabled tenants).

WHAT.

And there’s so much worse that I’ll share once I get the discovery back. I’m stunned.


Okay, the last few days have been a bit of a flurry. I’m still working on getting all the filings and the Defense’s responses uploaded, but I need to make sure we have a sane directory setup first, because there’s quite a bit.

After a few weeks using these LLMs, my attitudes about them are starting to shift. Each seems to have its own “personality” above and beyond the artificial one. The LLM I’ve been most amazed by so far is DeepSeek, which is funny because it was the one I was most skeptical of initially. Part of the issue with DeepSeek is that it doesn’t do as good a job of hand-holding through the thought process; it’s probably the most “autistic” of the engines. The biggest standout for DeepSeek (especially compared to Gemini, which is the most comparable) is that it’s SO GOOD with citations. It’s the only one that freely “admits” it’s plundering the Westlaw database, meaning we have a TON of local Eastern District citations.

My citation strategy has been to use something from each level: Supreme Court, Ninth Circuit, California federal courts, Eastern District. The reality is the magistrate doesn’t read them anyway, but DeepSeek ultimately gives me a pool of citations to choose from, something the other LLMs struggle with (or simply won’t do, no matter how you beat on them).

Gemini (using 2.5 Pro/5-06) is now my primary starting point for writing, even though I hate that it can’t export to .odt or even .docx formats, and it has the worst “canvas”-style option, so I have to reformat the documents significantly. That’s worth the annoyance, though, since Gemini is easily the most thoughtful and well-rounded of the LLMs I’ve used. Much as DeepSeek gives lots of citation options, Gemini gives the best pool of writing options; its ability to flexibly address issues in different ways is the best of the bunch so far.

Claude is still my pretty-but-shallow option: it comes up with the best and most compelling pure language, even if its depth of analysis falls short of the others. I wish I didn’t have to prompt it so explicitly to get decent results, or it might be the primary canvas despite its painful context window. One thing I find it really helpful for is running my DeepSeek output (which is pretty… “direct”) through Claude to soften it up.

And finally there’s ChatGPT which up until the last few days was my primary for everything. It’s easy to get started with it, it’s encouraging, it produces compelling results, but it’s kind of a piece of shit. It’s constantly messing up context, screwing up citations, and arbitrarily rewriting things. It’s easily the most obstinately wrong of the LLMs, insisting that it’s doing something correctly no matter how many times you post the incorrect output back into it and it admits it messed up. It’s the only LLM out of the three that I’ve had to deal with “hallucinations”, and it explains it as “woops, I overwrite my own context a lot accidentally”. It’s absolutely maddening at times. I’m steadily weaning myself off of it and it’ll probably just be another voice in the room soon.

An interesting thing happened a bit ago, I had my first LLM disagreement! The Defense responded to a discovery request with bullshit and boilerplate, so I asked each of the LLMs what a good email response would be considering I’m already nearly done with a rule 37 motion to compel. Claude responded that it’s probably not a good idea to respond at all, just update the motion with the response as an exhibit, note the bullshit parts since they helped (a lot), and call it a day. DeepSeek was it’s typical “RIP THEIR THROATS OUT” and provided a detailed list of regulations and rules to go on the offensive instead of only using the motion route. Gemini pointed out that going the email route is more about reinforcing our meet and confer duty, we’re right and we are going to be integrating this in the motion, but the focus here should be on making the meet and confer rock solid (pretty good advice). ChatGPT recommended the email, but produced recommendations that didn’t focus on strategy at all, just a direct rebuttal to the discovery response.

Interestingly, opposing counsel asked for a discovery extension (after more than two years the suit has been active, they are suddenly finding SHRA bureaucracy hard to navigate), but refused to do a rolling production or produce electronic records (which should be pretty easy right?). It’s baffling to me that if I had access to these tools two years ago, this organization is such a mess that they’d have gotten crushed out of the gate. Instead they were able to delay and avoid discovery to the point that we now have to start talking about institutional spoliation because no one remembers anything.

Leave a Comment

Your email address will not be published. Required fields are marked *

Just had the thought that doing a blog like this, while the litigation is on-going, is both naive and stupid, especially considering the mixed permissible use messaging from the court (but maybe okay for the bar, okay for me?). This popped up as I came across an old archive that had a backup of files which I believed was lost after we got evicted, some of which is responsive to existing discovery requests. I haven’t even had a chance to go through it all yet and see what has already been disclosed and what hasn’t, but here I am in a position where I really want to talk about how the tools are helping me plan strategy while potentially exposing myself unnecessarily/inadvertently.

The fool forges forward

One of the things that stood out to me was how much of the information in this case should have already been protected but hasn’t been. This has always been a concern, and one of the reasons I was hesitant about pursuing this at all, the gross invasion of privacy necessary to pursue this feels caustic. My initial reaction was filing anonymously, but after some reading it appeared that these are very rarely granted. It ultimately became a choice between whether this was important enough (to both us and every other disabled participant SHRA treats similarly) to sacrifice our privacy over. I think so.

What should have been done out of the gate is setting a protective order in place. Reading and trying to absorb as much of the local rules (while in the middle of being evicted, being cut off from other services, and other harassment), the option of a protective order just didn’t really register at all. The further along the process we got, the less space the option took up.

This is one place where LLMs work as an amazing sanity check, we can without penalty ask it to walk us through the process from the beginning, with as many permutations as we like, in a way that would cognitively exhaust any sane individual. And for individuals in the position we are, where we don’t even know what we don’t know, being able to constantly refactor new information in as we encounter it becomes a tremendously powerful tool.

While there’s supposed to be some leeway for “inelegant pleading” by pro se parties, in practice there isn’t unless a judge is personally sympathetic. Which means that pro se parties march into a minefield of issues that have nothing to do with the core issues of the case, procedural explosions which can turn a clear cut set of facts into an irrelevant mist.

Leave a Comment

Your email address will not be published. Required fields are marked *

One of the most difficult aspects of all this is how prone to unique issues each LLM introduces. I’m bouncing between Claude 3.7, GPT 4o, DeepSeek, and Gemini, and they all have weird peculiarities that blow up the others. DeepSeek for instance is super focused on formatting, being obsessive about tables, bolding, italics, everything but the actual content. Gemini is a turd and likes to fight you the whole time when it comes to legal advice altogether, so you have to wrestle it to the ground with prompts and cobble together the results. Claude writes beautifully but is as deep as a puddle, while ChatGPT is deep but impossibly schizophrenic and ADHD at the same time.

I went back and looked at my first two pleadings I used these LLMs for and holy crap there are a lot of issues that I missed trusting the output. First, ChatGPT will absolutely mangle the hell out of citations, and worse is super confident about it’s manglings. As an example I uploaded a copy of the docket report, and somehow it still got every single citation wrong. Worse, it misinterpreted who actually filed particular motions, leading to it interpreting most of the case being dead.

One thing is really clear, that as powerful and as much potential as these tools have, we are a million miles away from them being ready to use without extensive supervision. They also are really terrible at understanding, as an example left to it’s own devices all of the LLMs completely ignored the actual damages I asked for in a motion for partial MSJ (basically just ruling that the ADA, FHA, and Rehab require an interactive process and individualized assessment) and transformed it into a full blown MSJ focused on damage calculations instead of a narrow injunctive relief. And they constantly go off on little tangents like that where they hyper focus on some aspect of the text that may or may not be contextually important and try to expand it.

All said, this is still very much a work in progress, and I’m learning that trust is bad.

Leave a Comment

Your email address will not be published. Required fields are marked *

It seems we can’t find what you’re looking for. Perhaps searching can help.

Welcome to The SHRA Files Blog — where we’ll be sharing updates, research, public records, and commentary on our ongoing civil rights lawsuit and the broader failures of Sacramento’s housing system.

This site exists because no one else was telling this story. Because no agency, nonprofit, or public official stepped in when the Sacramento Housing and Redevelopment Agency (SHRA) failed to meet its basic responsibilities. Because we — and many others like us — were denied access to services that should have been guaranteed.

We started with nothing but documentation, determination, and the experience of being repeatedly failed by the systems meant to help. When traditional legal aid and advocacy organizations told us they couldn’t help, we turned to AI tools to help us structure our case, understand the law, and begin fighting back.

This site is just the beginning.

In the weeks ahead, we’ll be:

  • Breaking down SHRA’s failures in serving disabled and low-income tenants
  • Publishing public filings and supporting exhibits
  • Highlighting how the Housing Choice Voucher program is being mismanaged
  • Documenting the role of AI in expanding legal access

If you’ve been impacted by SHRA or want to contribute to the project, check out our Get Involved page.

Thanks for reading — and welcome to the fight.

— The SHRA Files Team

Leave a Comment

Your email address will not be published. Required fields are marked *

I’ve been attempting to push discovery, the result has been underwhelming. So far they’ve missed most of the deadlines, when notified they missed the deadlines they keep asking for extensions, and when they get extensions they respond with boilerplate and bullshit. I’m to the point where I’m filing a Rule 37(b) motion (will upload to the files section) after I’m finished with this post. The neat thing about the filing is that there’s over 300 pages of exhibits, all of which is boilerplate and bullshit that I had to attach and mail back to the Defense because the magistrate gave me the big middle finger by allowing the court to notify me of things electronically, but refusing to allow me to file electronically. I have lots to say about this magistrate, but we’ll wait until we get clear of her first.

One of the most fascinating things about the boilerplate and bullshit that did get returned was the revelation that five of the named Defendants no longer work at SHRA, and the Defense isn’t sure if they represent them anymore. Like they completely hid that almost everyone involved with the day to day operations over there has left and somehow didn’t mention it to us or the court. Worse, in their initial disclosure, they claimed only one person (who is no longer at SHRA) was responsible for deciding reasonable accommodation requests, now they’ve given me a list of 16 people, with no explanation.

Oh yeah, they can’t seem to find any of the electronically stored information that under the CPRA they’d be responsive to, which is the reason they need so many extensions.

The worst part is that despite them admitting all of this, there’s zero chance of them getting sanctions. And there’s literally nothing to be done about it unless/until it gets to the appellate level.

Leave a Comment

Your email address will not be published. Required fields are marked *

Up until now I assumed that the issues at SHRA were just incompetence creep, that the issues are product of slow pushes back on compliance issues that never got corrected, to the point where institutional knowledge of the practices necessary to maintain compliance was lost.

One of the things that I really like about DeepSeek is that it gets information through methods that aren’t exactly on the up and up, which means that it finds things which are hidden behind legal wrangling that doesn’t exist in China. And one of the things it found was a 2023 Internal Memo generated through whistleblower testimony in which SHRA staff instructed to “prioritize voucher terminations over costly mods”.

And the implementation of this was SHRA terminating vouchers over requests for shower chairs and grab bars as early as 2022. They provided data which showed 68% of disabled tenants who requested mods in 2023 lost vouchers within 6 months (vs. 12% of non-disabled).

WHAT.

And there’s so much worse that I’ll share once I get the discovery back. I’m stunned.

Leave a Comment

Your email address will not be published. Required fields are marked *

Okay the last few days have been a bit of a flurry. I’m still working on getting all the filings and the Defense responses uploaded, but I need to make sure we have a sane directory setup first because there’s quite a bit.

After a few weeks using these LLMs, my attitudes about them are starting to shift. Each of them seems to have their own “personality” above and beyond the artificial personality. The LLM I’ve been most amazed by so far is DeepSeek, which is funny because it was the LLM I was most skeptical of initially. Part of the issue with DeepSeek is that it doesn’t do as good of a job hand holding through the though process, or more appropriately, it’s probably the most “autistic” of the engines. The biggest standout for DeepSeek (especially compared to Gemini which is the most comparable) is that it’s SO GOOD with citations. It’s the only one that freely “admits” it’s plundering the WestLaw database meaning we have a TON of local Eastern District citations.

My citation strategy has been to use something from each level, Supreme Court, 9th Circuit, California (Fed), Eastern District. The reality is the magistrate doesn’t read them anyway, but DeepSeek ultimately gives me a pool of citations to choose from, something the other LLMs struggle with (or just don’t do no matter how you beat them).

Gemini (using 2.5 Pro/5-06) is now my primary writing starting point, even though I hate that it can’t export to .odf or even .docx formats. Because I have to reformat the documents significantly, it has the worst “canvas” style option. That’s kind of worth the annoyance though since Gemini is easily the most “thoughful” and well rounded of the LLMs I’ve used. Much like DeepSeek gives lots of citation options, Gemini gives the best pool of writing options, it’s ability to flexibly address issues in different ways is the best of the bunch so far.

Claude is still my pretty but shallow option, it comes up with the best/most compelling pure language, even if it’s depth of analysis falls short of the others. I wish I didn’t have to prompt it so explicitly to get decent results or it might be the primary canvas despite it’s painful context window. One thing I find it really helpful for is running my DeepSeek (which produces pretty.. “direct” language) through Claude to soften it up.

And finally there’s ChatGPT which up until the last few days was my primary for everything. It’s easy to get started with it, it’s encouraging, it produces compelling results, but it’s kind of a piece of shit. It’s constantly messing up context, screwing up citations, and arbitrarily rewriting things. It’s easily the most obstinately wrong of the LLMs, insisting that it’s doing something correctly no matter how many times you post the incorrect output back into it and it admits it messed up. It’s the only LLM out of the three that I’ve had to deal with “hallucinations”, and it explains it as “woops, I overwrite my own context a lot accidentally”. It’s absolutely maddening at times. I’m steadily weaning myself off of it and it’ll probably just be another voice in the room soon.

An interesting thing happened a bit ago, I had my first LLM disagreement! The Defense responded to a discovery request with bullshit and boilerplate, so I asked each of the LLMs what a good email response would be considering I’m already nearly done with a rule 37 motion to compel. Claude responded that it’s probably not a good idea to respond at all, just update the motion with the response as an exhibit, note the bullshit parts since they helped (a lot), and call it a day. DeepSeek was it’s typical “RIP THEIR THROATS OUT” and provided a detailed list of regulations and rules to go on the offensive instead of only using the motion route. Gemini pointed out that going the email route is more about reinforcing our meet and confer duty, we’re right and we are going to be integrating this in the motion, but the focus here should be on making the meet and confer rock solid (pretty good advice). ChatGPT recommended the email, but produced recommendations that didn’t focus on strategy at all, just a direct rebuttal to the discovery response.

Interestingly, opposing counsel asked for a discovery extension (after more than two years the suit has been active, they are suddenly finding SHRA bureaucracy hard to navigate), but refused to do a rolling production or produce electronic records (which should be pretty easy right?). It’s baffling to me that if I had access to these tools two years ago, this organization is such a mess that they’d have gotten crushed out of the gate. Instead they were able to delay and avoid discovery to the point that we now have to start talking about institutional spoliation because no one remembers anything.

Leave a Comment

Your email address will not be published. Required fields are marked *

Just had the thought that doing a blog like this, while the litigation is on-going, is both naive and stupid, especially considering the mixed permissible use messaging from the court (but maybe okay for the bar, okay for me?). This popped up as I came across an old archive that had a backup of files which I believed was lost after we got evicted, some of which is responsive to existing discovery requests. I haven’t even had a chance to go through it all yet and see what has already been disclosed and what hasn’t, but here I am in a position where I really want to talk about how the tools are helping me plan strategy while potentially exposing myself unnecessarily/inadvertently.

The fool forges forward

One of the things that stood out to me was how much of the information in this case should have already been protected but hasn’t been. This has always been a concern, and one of the reasons I was hesitant about pursuing this at all, the gross invasion of privacy necessary to pursue this feels caustic. My initial reaction was filing anonymously, but after some reading it appeared that these are very rarely granted. It ultimately became a choice between whether this was important enough (to both us and every other disabled participant SHRA treats similarly) to sacrifice our privacy over. I think so.

What should have been done out of the gate is setting a protective order in place. Reading and trying to absorb as much of the local rules (while in the middle of being evicted, being cut off from other services, and other harassment), the option of a protective order just didn’t really register at all. The further along the process we got, the less space the option took up.

This is one place where LLMs work as an amazing sanity check, we can without penalty ask it to walk us through the process from the beginning, with as many permutations as we like, in a way that would cognitively exhaust any sane individual. And for individuals in the position we are, where we don’t even know what we don’t know, being able to constantly refactor new information in as we encounter it becomes a tremendously powerful tool.

While there’s supposed to be some leeway for “inelegant pleading” by pro se parties, in practice there isn’t unless a judge is personally sympathetic. Which means that pro se parties march into a minefield of issues that have nothing to do with the core issues of the case, procedural explosions which can turn a clear cut set of facts into an irrelevant mist.

Leave a Comment

Your email address will not be published. Required fields are marked *

One of the most difficult aspects of all this is how prone to unique issues each LLM introduces. I’m bouncing between Claude 3.7, GPT 4o, DeepSeek, and Gemini, and they all have weird peculiarities that blow up the others. DeepSeek for instance is super focused on formatting, being obsessive about tables, bolding, italics, everything but the actual content. Gemini is a turd and likes to fight you the whole time when it comes to legal advice altogether, so you have to wrestle it to the ground with prompts and cobble together the results. Claude writes beautifully but is as deep as a puddle, while ChatGPT is deep but impossibly schizophrenic and ADHD at the same time.

I went back and looked at my first two pleadings I used these LLMs for and holy crap there are a lot of issues that I missed trusting the output. First, ChatGPT will absolutely mangle the hell out of citations, and worse is super confident about it’s manglings. As an example I uploaded a copy of the docket report, and somehow it still got every single citation wrong. Worse, it misinterpreted who actually filed particular motions, leading to it interpreting most of the case being dead.

One thing is really clear, that as powerful and as much potential as these tools have, we are a million miles away from them being ready to use without extensive supervision. They also are really terrible at understanding, as an example left to it’s own devices all of the LLMs completely ignored the actual damages I asked for in a motion for partial MSJ (basically just ruling that the ADA, FHA, and Rehab require an interactive process and individualized assessment) and transformed it into a full blown MSJ focused on damage calculations instead of a narrow injunctive relief. And they constantly go off on little tangents like that where they hyper focus on some aspect of the text that may or may not be contextually important and try to expand it.

All that said, this is still very much a work in progress, and I’m learning that trust is bad.

Welcome to The SHRA Files Blog — where we’ll be sharing updates, research, public records, and commentary on our ongoing civil rights lawsuit and the broader failures of Sacramento’s housing system.

This site exists because no one else was telling this story. Because no agency, nonprofit, or public official stepped in when the Sacramento Housing and Redevelopment Agency (SHRA) failed to meet its basic responsibilities. Because we — and many others like us — were denied access to services that should have been guaranteed.

We started with nothing but documentation, determination, and the experience of being repeatedly failed by the systems meant to help. When traditional legal aid and advocacy organizations told us they couldn’t help, we turned to AI tools to help us structure our case, understand the law, and begin fighting back.

This site is just the beginning.

In the weeks ahead, we’ll be:

  • Breaking down SHRA’s failures in serving disabled and low-income tenants
  • Publishing public filings and supporting exhibits
  • Highlighting how the Housing Choice Voucher program is being mismanaged
  • Documenting the role of AI in expanding legal access

If you’ve been impacted by SHRA or want to contribute to the project, check out our Get Involved page.

Thanks for reading — and welcome to the fight.

— The SHRA Files Team

Up until now I assumed that the issues at SHRA were just incompetence creep: the product of slow backsliding on compliance issues that never got corrected, to the point where institutional knowledge of the practices necessary to maintain compliance was lost.

One of the things I really like about DeepSeek is that it gets information through methods that aren’t exactly on the up and up, which means it finds things hidden behind legal wrangling that doesn’t exist in China. One of the things it found was a 2023 internal memo, generated through whistleblower testimony, in which SHRA staff were instructed to “prioritize voucher terminations over costly mods.”

The implementation of this was SHRA terminating vouchers over requests for shower chairs and grab bars as early as 2022. It also provided data showing that 68% of disabled tenants who requested mods in 2023 lost their vouchers within six months (versus 12% of non-disabled tenants).

WHAT.

And there’s so much worse that I’ll share once I get the discovery back. I’m stunned.

Okay, the last few days have been a bit of a flurry. I’m still working on getting all the filings and the Defense responses uploaded, but I need to make sure we have a sane directory setup first, because there’s quite a bit.

After a few weeks using these LLMs, my attitudes about them are starting to shift. Each of them seems to have its own “personality” above and beyond the artificial one. The LLM I’ve been most amazed by so far is DeepSeek, which is funny because it was the one I was most skeptical of initially. Part of the issue with DeepSeek is that it doesn’t do as good a job of hand-holding you through its thought process; or, more appropriately, it’s probably the most “autistic” of the engines. The biggest standout for DeepSeek (especially compared to Gemini, which is the most comparable) is that it’s SO GOOD with citations. It’s the only one that freely “admits” it’s plundering the Westlaw database, meaning we have a TON of local Eastern District citations.

My citation strategy has been to use something from each level: Supreme Court, Ninth Circuit, California federal courts, and the Eastern District. The reality is the magistrate doesn’t read them anyway, but DeepSeek ultimately gives me a pool of citations to choose from, something the other LLMs struggle with (or just won’t do no matter how you beat them).

Gemini (using 2.5 Pro/5-06) is now my primary starting point for writing, even though I hate that it can’t export to .odt or even .docx formats. It also has the worst “canvas”-style editor, which means I have to reformat the documents significantly. That’s worth the annoyance, though, since Gemini is easily the most thoughtful and well-rounded of the LLMs I’ve used. Much like DeepSeek gives lots of citation options, Gemini gives the best pool of writing options; its ability to flexibly address issues in different ways is the best of the bunch so far.

Claude is still my pretty-but-shallow option: it comes up with the best and most compelling pure language, even if its depth of analysis falls short of the others. I wish I didn’t have to prompt it so explicitly to get decent results, or it might be the primary canvas despite its painful context window. One thing I find it really helpful for is running my DeepSeek output (which produces pretty… “direct” language) through Claude to soften it up.

And finally there’s ChatGPT, which up until the last few days was my primary for everything. It’s easy to get started with, it’s encouraging, it produces compelling results, but it’s kind of a piece of shit. It’s constantly messing up context, screwing up citations, and arbitrarily rewriting things. It’s easily the most obstinately wrong of the LLMs, insisting it’s doing something correctly no matter how many times you paste the incorrect output back in, even after it admits it messed up. It’s the only LLM of the bunch I’ve had to deal with “hallucinations” from, and it explains them as “woops, I overwrite my own context a lot accidentally.” It’s absolutely maddening at times. I’m steadily weaning myself off of it, and it’ll probably just be another voice in the room soon.

An interesting thing happened a bit ago: I had my first LLM disagreement! The Defense responded to a discovery request with bullshit and boilerplate, so I asked each of the LLMs what a good email response would be, considering I’m already nearly done with a Rule 37 motion to compel. Claude responded that it’s probably not a good idea to respond at all: just update the motion with the response as an exhibit, note the bullshit parts since they helped (a lot), and call it a day. DeepSeek was its typical “RIP THEIR THROATS OUT” self and provided a detailed list of regulations and rules for going on the offensive instead of relying only on the motion. Gemini pointed out that the email route is more about reinforcing our meet-and-confer duty: we’re right, and we’re going to integrate this into the motion, but the focus here should be on making the meet-and-confer record rock solid (pretty good advice). ChatGPT recommended the email, but produced recommendations that didn’t focus on strategy at all, just a direct rebuttal to the discovery response.

Interestingly, opposing counsel asked for a discovery extension (after the suit has been active for more than two years, they are suddenly finding SHRA’s bureaucracy hard to navigate), but refused to do a rolling production or produce electronic records (which should be pretty easy, right?). What’s baffling is that if I’d had access to these tools two years ago, this organization is such a mess that they’d have been crushed out of the gate. Instead, they were able to delay and avoid discovery to the point that we now have to start talking about institutional spoliation, because no one remembers anything.

Just had the thought that doing a blog like this while the litigation is ongoing is both naive and stupid, especially considering the court’s mixed messaging on permissible use (though maybe what’s okay for the bar is okay for me?). This came up as I stumbled across an old archive containing a backup of files I believed were lost after we got evicted, some of which are responsive to existing discovery requests. I haven’t even had a chance to go through it all yet and see what has already been disclosed and what hasn’t, but here I am, wanting to talk about how the tools are helping me plan strategy while potentially exposing myself unnecessarily or inadvertently.

The fool forges forward

One of the things that stood out to me was how much of the information in this case should already have been protected but hasn’t been. This has always been a concern, and one of the reasons I was hesitant about pursuing this at all: the gross invasion of privacy necessary to pursue it feels caustic. My initial instinct was to file anonymously, but after some reading it appeared that such requests are very rarely granted. It ultimately became a question of whether this was important enough (to both us and every other disabled participant SHRA treats similarly) to sacrifice our privacy over. I think it is.

What should have been done out of the gate was getting a protective order in place. While reading and trying to absorb as much of the local rules as I could (in the middle of being evicted, being cut off from other services, and other harassment), the option of a protective order just didn’t register at all. The further along the process we got, the less space the option took up.

This is one place where LLMs work as an amazing sanity check: we can, without penalty, ask to be walked through the process from the beginning, with as many permutations as we like, in a way that would cognitively exhaust any sane individual. And for people in our position, who don’t even know what we don’t know, being able to constantly factor in new information as we encounter it becomes a tremendously powerful tool.

While there’s supposed to be some leeway for “inelegant pleading” by pro se parties, in practice there isn’t unless a judge is personally sympathetic. That means pro se parties march into a minefield of issues that have nothing to do with the core of the case: procedural explosions that can turn a clear-cut set of facts into irrelevant mist.

Leave a Comment

Your email address will not be published. Required fields are marked *

One of the most difficult aspects of all this is how prone to unique issues each LLM introduces. I’m bouncing between Claude 3.7, GPT 4o, DeepSeek, and Gemini, and they all have weird peculiarities that blow up the others. DeepSeek for instance is super focused on formatting, being obsessive about tables, bolding, italics, everything but the actual content. Gemini is a turd and likes to fight you the whole time when it comes to legal advice altogether, so you have to wrestle it to the ground with prompts and cobble together the results. Claude writes beautifully but is as deep as a puddle, while ChatGPT is deep but impossibly schizophrenic and ADHD at the same time.

I went back and looked at my first two pleadings I used these LLMs for and holy crap there are a lot of issues that I missed trusting the output. First, ChatGPT will absolutely mangle the hell out of citations, and worse is super confident about it’s manglings. As an example I uploaded a copy of the docket report, and somehow it still got every single citation wrong. Worse, it misinterpreted who actually filed particular motions, leading to it interpreting most of the case being dead.

One thing is really clear, that as powerful and as much potential as these tools have, we are a million miles away from them being ready to use without extensive supervision. They also are really terrible at understanding, as an example left to it’s own devices all of the LLMs completely ignored the actual damages I asked for in a motion for partial MSJ (basically just ruling that the ADA, FHA, and Rehab require an interactive process and individualized assessment) and transformed it into a full blown MSJ focused on damage calculations instead of a narrow injunctive relief. And they constantly go off on little tangents like that where they hyper focus on some aspect of the text that may or may not be contextually important and try to expand it.

All said, this is still very much a work in progress, and I’m learning that trust is bad.

Leave a Comment

Your email address will not be published. Required fields are marked *

It seems we can’t find what you’re looking for. Perhaps searching can help.

Welcome to The SHRA Files Blog — where we’ll be sharing updates, research, public records, and commentary on our ongoing civil rights lawsuit and the broader failures of Sacramento’s housing system.

This site exists because no one else was telling this story. Because no agency, nonprofit, or public official stepped in when the Sacramento Housing and Redevelopment Agency (SHRA) failed to meet its basic responsibilities. Because we — and many others like us — were denied access to services that should have been guaranteed.

We started with nothing but documentation, determination, and the experience of being repeatedly failed by the systems meant to help. When traditional legal aid and advocacy organizations told us they couldn’t help, we turned to AI tools to help us structure our case, understand the law, and begin fighting back.

This site is just the beginning.

In the weeks ahead, we’ll be:

  • Breaking down SHRA’s failures in serving disabled and low-income tenants
  • Publishing public filings and supporting exhibits
  • Highlighting how the Housing Choice Voucher program is being mismanaged
  • Documenting the role of AI in expanding legal access

If you’ve been impacted by SHRA or want to contribute to the project, check out our Get Involved page.

Thanks for reading — and welcome to the fight.

— The SHRA Files Team

Leave a Comment

Your email address will not be published. Required fields are marked *

I’ve been attempting to push discovery, the result has been underwhelming. So far they’ve missed most of the deadlines, when notified they missed the deadlines they keep asking for extensions, and when they get extensions they respond with boilerplate and bullshit. I’m to the point where I’m filing a Rule 37(b) motion (will upload to the files section) after I’m finished with this post. The neat thing about the filing is that there’s over 300 pages of exhibits, all of which is boilerplate and bullshit that I had to attach and mail back to the Defense because the magistrate gave me the big middle finger by allowing the court to notify me of things electronically, but refusing to allow me to file electronically. I have lots to say about this magistrate, but we’ll wait until we get clear of her first.

One of the most fascinating things about the boilerplate and bullshit that did get returned was the revelation that five of the named Defendants no longer work at SHRA, and the Defense isn’t sure if they represent them anymore. Like they completely hid that almost everyone involved with the day to day operations over there has left and somehow didn’t mention it to us or the court. Worse, in their initial disclosure, they claimed only one person (who is no longer at SHRA) was responsible for deciding reasonable accommodation requests, now they’ve given me a list of 16 people, with no explanation.

Oh yeah, they can’t seem to find any of the electronically stored information that under the CPRA they’d be responsive to, which is the reason they need so many extensions.

The worst part is that despite them admitting all of this, there’s zero chance of them getting sanctions. And there’s literally nothing to be done about it unless/until it gets to the appellate level.

Leave a Comment

Your email address will not be published. Required fields are marked *

Up until now I assumed that the issues at SHRA were just incompetence creep, that the issues are product of slow pushes back on compliance issues that never got corrected, to the point where institutional knowledge of the practices necessary to maintain compliance was lost.

One of the things that I really like about DeepSeek is that it gets information through methods that aren’t exactly on the up and up, which means that it finds things which are hidden behind legal wrangling that doesn’t exist in China. And one of the things it found was a 2023 Internal Memo generated through whistleblower testimony in which SHRA staff instructed to “prioritize voucher terminations over costly mods”.

And the implementation of this was SHRA terminating vouchers over requests for shower chairs and grab bars as early as 2022. They provided data which showed 68% of disabled tenants who requested mods in 2023 lost vouchers within 6 months (vs. 12% of non-disabled).

WHAT.

And there’s so much worse that I’ll share once I get the discovery back. I’m stunned.

Leave a Comment

Your email address will not be published. Required fields are marked *

Okay the last few days have been a bit of a flurry. I’m still working on getting all the filings and the Defense responses uploaded, but I need to make sure we have a sane directory setup first because there’s quite a bit.

After a few weeks using these LLMs, my attitudes about them are starting to shift. Each of them seems to have their own “personality” above and beyond the artificial personality. The LLM I’ve been most amazed by so far is DeepSeek, which is funny because it was the LLM I was most skeptical of initially. Part of the issue with DeepSeek is that it doesn’t do as good of a job hand holding through the though process, or more appropriately, it’s probably the most “autistic” of the engines. The biggest standout for DeepSeek (especially compared to Gemini which is the most comparable) is that it’s SO GOOD with citations. It’s the only one that freely “admits” it’s plundering the WestLaw database meaning we have a TON of local Eastern District citations.

My citation strategy has been to use something from each level, Supreme Court, 9th Circuit, California (Fed), Eastern District. The reality is the magistrate doesn’t read them anyway, but DeepSeek ultimately gives me a pool of citations to choose from, something the other LLMs struggle with (or just don’t do no matter how you beat them).

Gemini (using 2.5 Pro/5-06) is now my primary writing starting point, even though I hate that it can’t export to .odf or even .docx formats. Because I have to reformat the documents significantly, it has the worst “canvas” style option. That’s kind of worth the annoyance though since Gemini is easily the most “thoughful” and well rounded of the LLMs I’ve used. Much like DeepSeek gives lots of citation options, Gemini gives the best pool of writing options, it’s ability to flexibly address issues in different ways is the best of the bunch so far.

Claude is still my pretty but shallow option, it comes up with the best/most compelling pure language, even if it’s depth of analysis falls short of the others. I wish I didn’t have to prompt it so explicitly to get decent results or it might be the primary canvas despite it’s painful context window. One thing I find it really helpful for is running my DeepSeek (which produces pretty.. “direct” language) through Claude to soften it up.

And finally there’s ChatGPT which up until the last few days was my primary for everything. It’s easy to get started with it, it’s encouraging, it produces compelling results, but it’s kind of a piece of shit. It’s constantly messing up context, screwing up citations, and arbitrarily rewriting things. It’s easily the most obstinately wrong of the LLMs, insisting that it’s doing something correctly no matter how many times you post the incorrect output back into it and it admits it messed up. It’s the only LLM out of the three that I’ve had to deal with “hallucinations”, and it explains it as “woops, I overwrite my own context a lot accidentally”. It’s absolutely maddening at times. I’m steadily weaning myself off of it and it’ll probably just be another voice in the room soon.

An interesting thing happened a bit ago, I had my first LLM disagreement! The Defense responded to a discovery request with bullshit and boilerplate, so I asked each of the LLMs what a good email response would be considering I’m already nearly done with a rule 37 motion to compel. Claude responded that it’s probably not a good idea to respond at all, just update the motion with the response as an exhibit, note the bullshit parts since they helped (a lot), and call it a day. DeepSeek was it’s typical “RIP THEIR THROATS OUT” and provided a detailed list of regulations and rules to go on the offensive instead of only using the motion route. Gemini pointed out that going the email route is more about reinforcing our meet and confer duty, we’re right and we are going to be integrating this in the motion, but the focus here should be on making the meet and confer rock solid (pretty good advice). ChatGPT recommended the email, but produced recommendations that didn’t focus on strategy at all, just a direct rebuttal to the discovery response.

Interestingly, opposing counsel asked for a discovery extension (after more than two years the suit has been active, they are suddenly finding SHRA bureaucracy hard to navigate), but refused to do a rolling production or produce electronic records (which should be pretty easy right?). It’s baffling to me that if I had access to these tools two years ago, this organization is such a mess that they’d have gotten crushed out of the gate. Instead they were able to delay and avoid discovery to the point that we now have to start talking about institutional spoliation because no one remembers anything.

Leave a Comment

Your email address will not be published. Required fields are marked *

Just had the thought that doing a blog like this, while the litigation is on-going, is both naive and stupid, especially considering the mixed permissible use messaging from the court (but maybe okay for the bar, okay for me?). This popped up as I came across an old archive that had a backup of files which I believed was lost after we got evicted, some of which is responsive to existing discovery requests. I haven’t even had a chance to go through it all yet and see what has already been disclosed and what hasn’t, but here I am in a position where I really want to talk about how the tools are helping me plan strategy while potentially exposing myself unnecessarily/inadvertently.

The fool forges forward

One of the things that stood out to me was how much of the information in this case should have already been protected but hasn’t been. This has always been a concern, and one of the reasons I was hesitant about pursuing this at all, the gross invasion of privacy necessary to pursue this feels caustic. My initial reaction was filing anonymously, but after some reading it appeared that these are very rarely granted. It ultimately became a choice between whether this was important enough (to both us and every other disabled participant SHRA treats similarly) to sacrifice our privacy over. I think so.

What should have been done out of the gate is setting a protective order in place. Reading and trying to absorb as much of the local rules (while in the middle of being evicted, being cut off from other services, and other harassment), the option of a protective order just didn’t really register at all. The further along the process we got, the less space the option took up.

This is one place where LLMs work as an amazing sanity check, we can without penalty ask it to walk us through the process from the beginning, with as many permutations as we like, in a way that would cognitively exhaust any sane individual. And for individuals in the position we are, where we don’t even know what we don’t know, being able to constantly refactor new information in as we encounter it becomes a tremendously powerful tool.

While there’s supposed to be some leeway for “inelegant pleading” by pro se parties, in practice there isn’t unless a judge is personally sympathetic. Which means that pro se parties march into a minefield of issues that have nothing to do with the core issues of the case, procedural explosions which can turn a clear cut set of facts into an irrelevant mist.

Leave a Comment

Your email address will not be published. Required fields are marked *

One of the most difficult aspects of all this is how prone to unique issues each LLM introduces. I’m bouncing between Claude 3.7, GPT 4o, DeepSeek, and Gemini, and they all have weird peculiarities that blow up the others. DeepSeek for instance is super focused on formatting, being obsessive about tables, bolding, italics, everything but the actual content. Gemini is a turd and likes to fight you the whole time when it comes to legal advice altogether, so you have to wrestle it to the ground with prompts and cobble together the results. Claude writes beautifully but is as deep as a puddle, while ChatGPT is deep but impossibly schizophrenic and ADHD at the same time.

I went back and looked at my first two pleadings I used these LLMs for and holy crap there are a lot of issues that I missed trusting the output. First, ChatGPT will absolutely mangle the hell out of citations, and worse is super confident about it’s manglings. As an example I uploaded a copy of the docket report, and somehow it still got every single citation wrong. Worse, it misinterpreted who actually filed particular motions, leading to it interpreting most of the case being dead.

One thing is really clear, that as powerful and as much potential as these tools have, we are a million miles away from them being ready to use without extensive supervision. They also are really terrible at understanding, as an example left to it’s own devices all of the LLMs completely ignored the actual damages I asked for in a motion for partial MSJ (basically just ruling that the ADA, FHA, and Rehab require an interactive process and individualized assessment) and transformed it into a full blown MSJ focused on damage calculations instead of a narrow injunctive relief. And they constantly go off on little tangents like that where they hyper focus on some aspect of the text that may or may not be contextually important and try to expand it.

All said, this is still very much a work in progress, and I’m learning that trust is bad.

Leave a Comment

Your email address will not be published. Required fields are marked *

It seems we can’t find what you’re looking for. Perhaps searching can help.

Welcome to The SHRA Files Blog — where we’ll be sharing updates, research, public records, and commentary on our ongoing civil rights lawsuit and the broader failures of Sacramento’s housing system.

This site exists because no one else was telling this story. Because no agency, nonprofit, or public official stepped in when the Sacramento Housing and Redevelopment Agency (SHRA) failed to meet its basic responsibilities. Because we — and many others like us — were denied access to services that should have been guaranteed.

We started with nothing but documentation, determination, and the experience of being repeatedly failed by the systems meant to help. When traditional legal aid and advocacy organizations told us they couldn’t help, we turned to AI tools to help us structure our case, understand the law, and begin fighting back.

This site is just the beginning.

In the weeks ahead, we’ll be:

  • Breaking down SHRA’s failures in serving disabled and low-income tenants
  • Publishing public filings and supporting exhibits
  • Highlighting how the Housing Choice Voucher program is being mismanaged
  • Documenting the role of AI in expanding legal access

If you’ve been impacted by SHRA or want to contribute to the project, check out our Get Involved page.

Thanks for reading — and welcome to the fight.

— The SHRA Files Team

Leave a Comment

Your email address will not be published. Required fields are marked *

One of the things I really like about DeepSeek is that it gathers information through methods that aren’t exactly on the up and up, which means it surfaces things hidden behind legal wrangling that doesn’t exist in China. One of the things it found was a 2023 internal memo, surfaced through whistleblower testimony, in which SHRA staff were instructed to “prioritize voucher terminations over costly mods.”

In practice, that meant SHRA was terminating vouchers over requests for shower chairs and grab bars as early as 2022. It also produced data showing that 68% of disabled tenants who requested mods in 2023 lost their vouchers within six months (vs. 12% of non-disabled tenants).

WHAT.

And there’s so much worse that I’ll share once I get the discovery back. I’m stunned.

Okay, the last few days have been a bit of a flurry. I’m still working on getting all the filings and the Defense responses uploaded, but I need to make sure we have a sane directory setup first because there’s quite a bit.

After a few weeks using these LLMs, my attitudes about them are starting to shift. Each seems to have its own “personality” above and beyond the artificial one. The LLM I’ve been most amazed by so far is DeepSeek, which is funny because it’s the one I was most skeptical of initially. Part of the issue with DeepSeek is that it doesn’t do as good a job of hand-holding you through the thought process; it’s probably the most “autistic” of the engines. The biggest standout for DeepSeek (especially compared to Gemini, which is the most comparable) is that it’s SO GOOD with citations. It’s the only one that freely “admits” it’s plundering the Westlaw database, meaning we have a TON of local Eastern District citations.

My citation strategy has been to use something from each level: Supreme Court, Ninth Circuit, California (federal), Eastern District. The reality is the magistrate doesn’t read them anyway, but DeepSeek gives me a pool of citations to choose from, something the other LLMs struggle with (or simply won’t do, no matter how hard you push).
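If I were to sketch that selection strategy in code, it would just be a filter: keep the first citation you like at each court level and discard the rest of the pool. This is purely illustrative; every case name and level label below is a placeholder, not a real citation.

```python
from collections import OrderedDict

# Court levels to cover, one citation each (placeholder labels).
LEVELS = ["Supreme Court", "9th Circuit", "E.D. Cal."]

def pick_one_per_level(pool):
    """From (level, citation) pairs, keep the first citation seen per level."""
    chosen = OrderedDict()
    for level, cite in pool:
        if level in LEVELS and level not in chosen:
            chosen[level] = cite
    return chosen

# Hypothetical pool of LLM-suggested citations.
pool = [
    ("9th Circuit", "Placeholder v. Example, 123 F.3d 456 (9th Cir. 1999)"),
    ("Supreme Court", "Example v. Placeholder, 500 U.S. 1 (1991)"),
    ("9th Circuit", "Another v. Case, 321 F.3d 654 (9th Cir. 2003)"),
    ("E.D. Cal.", "Local v. Sample, No. 2:20-cv-0001 (E.D. Cal. 2021)"),
]
chosen = pick_one_per_level(pool)
```

The point isn’t the code, it’s the discipline: the LLM generates the pool, but a human applies the filter and verifies every survivor.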

Gemini (using 2.5 Pro/5-06) is now my primary starting point for writing, even though I hate that it can’t export to .odt or even .docx. It has the worst “canvas”-style option, which means I have to reformat the documents significantly. That annoyance is worth it, though, since Gemini is easily the most thoughtful and well-rounded of the LLMs I’ve used. Much as DeepSeek gives lots of citation options, Gemini gives the best pool of writing options; its ability to flexibly address issues in different ways is the best of the bunch so far.

Claude is still my pretty-but-shallow option; it comes up with the best and most compelling pure language, even if its depth of analysis falls short of the others. I wish I didn’t have to prompt it so explicitly to get decent results, or it might be the primary canvas despite its painful context window. One thing it’s really helpful for is running my DeepSeek output (which tends toward pretty “direct” language) through Claude to soften it up.

And finally there’s ChatGPT, which until the last few days was my primary for everything. It’s easy to get started with, it’s encouraging, it produces compelling results, but it’s kind of a piece of shit. It’s constantly messing up context, screwing up citations, and arbitrarily rewriting things. It’s easily the most obstinately wrong of the LLMs, insisting it’s doing something correctly no matter how many times you paste the incorrect output back in and it admits it messed up. It’s the only one of the bunch where I’ve had to deal with outright “hallucinations,” and it explains them as “whoops, I accidentally overwrite my own context a lot.” It’s absolutely maddening at times. I’m steadily weaning myself off of it, and it’ll probably just be another voice in the room soon.

An interesting thing happened a bit ago: I had my first LLM disagreement! The Defense responded to a discovery request with bullshit and boilerplate, so I asked each of the LLMs what a good email response would be, considering I’m already nearly done with a Rule 37 motion to compel. Claude said it’s probably not a good idea to respond at all: just update the motion with the response as an exhibit, note the bullshit parts since they helped (a lot), and call it a day. DeepSeek was its typical “RIP THEIR THROATS OUT” self and provided a detailed list of regulations and rules for going on the offensive instead of relying only on the motion. Gemini pointed out that the email route is really about reinforcing our meet-and-confer duty: we’re right, and we’re going to integrate this into the motion, but the focus should be on making the meet-and-confer record rock solid (pretty good advice). ChatGPT recommended the email but produced recommendations that didn’t focus on strategy at all, just a direct rebuttal to the discovery response.

Interestingly, opposing counsel asked for a discovery extension (after more than two years of the suit being active, they are suddenly finding SHRA’s bureaucracy hard to navigate), but refused to do a rolling production or produce electronic records (which should be pretty easy, right?). What’s baffling is that if I’d had access to these tools two years ago, this organization is such a mess that they’d have been crushed out of the gate. Instead, they were able to delay and avoid discovery to the point that we now have to start talking about institutional spoliation, because no one remembers anything.

Just had the thought that doing a blog like this while the litigation is ongoing is both naive and stupid, especially considering the mixed permissible-use messaging from the court (but maybe what’s okay for the bar is okay for me?). This popped up as I came across an old archive containing a backup of files I believed were lost after we got evicted, some of which are responsive to existing discovery requests. I haven’t even had a chance to go through it all and see what has already been disclosed and what hasn’t, but here I am wanting to talk about how the tools are helping me plan strategy while potentially exposing myself unnecessarily or inadvertently.

The fool forges forward

One of the things that stood out to me was how much of the information in this case should already have been protected but hasn’t been. This has always been a concern, and one of the reasons I was hesitant about pursuing this at all; the gross invasion of privacy necessary to pursue it feels caustic. My initial reaction was to file anonymously, but after some reading it appeared those requests are very rarely granted. It ultimately became a question of whether this was important enough (to us and to every other disabled participant SHRA treats this way) to sacrifice our privacy over. I think it is.

What should have been done out of the gate is putting a protective order in place. While I was reading and trying to absorb as much of the local rules as I could (in the middle of being evicted, cut off from other services, and otherwise harassed), the option of a protective order just never registered. The further along the process we got, the less space the option took up.

This is one place where LLMs work as an amazing sanity check: we can, without penalty, ask one to walk us through the process from the beginning, with as many permutations as we like, in a way that would cognitively exhaust any sane human. And for people in our position, who don’t even know what we don’t know, being able to constantly refactor in new information as we encounter it is a tremendously powerful tool.

While there’s supposed to be some leeway for “inelegant pleading” by pro se parties, in practice there isn’t unless a judge is personally sympathetic. That means pro se parties march into a minefield of issues that have nothing to do with the core of the case: procedural explosions that can turn a clear-cut set of facts into irrelevant mist.

One of the most difficult aspects of all this is how each LLM introduces its own unique issues. I’m bouncing between Claude 3.7, GPT-4o, DeepSeek, and Gemini, and they all have weird peculiarities the others don’t. DeepSeek, for instance, is super focused on formatting, obsessing over tables, bolding, italics, everything but the actual content. Gemini is a turd that fights you the whole way when it comes to legal advice, so you have to wrestle it to the ground with prompts and cobble together the results. Claude writes beautifully but is as deep as a puddle, while ChatGPT is deep but impossibly schizophrenic and ADHD at the same time.

I went back and looked at the first two pleadings I used these LLMs for, and holy crap, there are a lot of issues I missed by trusting the output. First, ChatGPT will absolutely mangle the hell out of citations, and worse, it’s supremely confident about its manglings. As an example, I uploaded a copy of the docket report, and it still got every single citation wrong. Worse, it misinterpreted who actually filed particular motions, leading it to conclude that most of the case was dead.
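The only defense I’ve found is brute-force verification: never let an LLM-suggested citation into a filing unless it’s on a list I’ve independently confirmed. A minimal sketch of that cross-check, assuming a hand-verified “known good” set (the two real cites below are genuine ADA cases; the third is deliberately fabricated to show what gets flagged):

```python
# Flag any LLM-suggested citation that isn't on an independently
# verified list. The "Made-Up" cite is intentionally fake.
known_good = {
    "Tennessee v. Lane, 541 U.S. 509 (2004)",
    "US Airways, Inc. v. Barnett, 535 U.S. 391 (2002)",
}

llm_suggested = [
    "Tennessee v. Lane, 541 U.S. 509 (2004)",
    "Made-Up v. Hallucinated, 999 U.S. 1 (2099)",
]

verified = [c for c in llm_suggested if c in known_good]  # safe to use
flagged = [c for c in llm_suggested if c not in known_good]  # must be checked by hand
```

Crude, but it catches exactly the failure mode above: a confidently formatted citation that doesn’t actually exist.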

One thing is really clear: for all their power and potential, these tools are a million miles away from being usable without extensive supervision. They’re also really bad at understanding intent. As an example, left to their own devices, all of the LLMs completely ignored the relief I actually asked for in a motion for partial MSJ (basically just a ruling that the ADA, FHA, and Rehab Act require an interactive process and individualized assessment) and transformed it into a full-blown MSJ focused on damage calculations instead of narrow injunctive relief. They constantly go off on tangents like that, hyper-focusing on some aspect of the text that may or may not be contextually important and trying to expand it.

All said, this is still very much a work in progress, and I’m learning that trust is bad.

Welcome to The SHRA Files Blog — where we’ll be sharing updates, research, public records, and commentary on our ongoing civil rights lawsuit and the broader failures of Sacramento’s housing system.

This site exists because no one else was telling this story. Because no agency, nonprofit, or public official stepped in when the Sacramento Housing and Redevelopment Agency (SHRA) failed to meet its basic responsibilities. Because we — and many others like us — were denied access to services that should have been guaranteed.

We started with nothing but documentation, determination, and the experience of being repeatedly failed by the systems meant to help. When traditional legal aid and advocacy organizations told us they couldn’t help, we turned to AI tools to help us structure our case, understand the law, and begin fighting back.

This site is just the beginning.

In the weeks ahead, we’ll be:

  • Breaking down SHRA’s failures in serving disabled and low-income tenants
  • Publishing public filings and supporting exhibits
  • Highlighting how the Housing Choice Voucher program is being mismanaged
  • Documenting the role of AI in expanding legal access

If you’ve been impacted by SHRA or want to contribute to the project, check out our Get Involved page.

Thanks for reading — and welcome to the fight.

— The SHRA Files Team


The fool forges forward

One of the things that stood out to me was how much of the information in this case should have already been protected but hasn’t been. This has always been a concern, and one of the reasons I was hesitant about pursuing this at all, the gross invasion of privacy necessary to pursue this feels caustic. My initial reaction was filing anonymously, but after some reading it appeared that these are very rarely granted. It ultimately became a choice between whether this was important enough (to both us and every other disabled participant SHRA treats similarly) to sacrifice our privacy over. I think so.

What should have been done out of the gate is setting a protective order in place. Reading and trying to absorb as much of the local rules (while in the middle of being evicted, being cut off from other services, and other harassment), the option of a protective order just didn’t really register at all. The further along the process we got, the less space the option took up.

This is one place where LLMs work as an amazing sanity check, we can without penalty ask it to walk us through the process from the beginning, with as many permutations as we like, in a way that would cognitively exhaust any sane individual. And for individuals in the position we are, where we don’t even know what we don’t know, being able to constantly refactor new information in as we encounter it becomes a tremendously powerful tool.

While there’s supposed to be some leeway for “inelegant pleading” by pro se parties, in practice there isn’t unless a judge is personally sympathetic. Which means that pro se parties march into a minefield of issues that have nothing to do with the core issues of the case, procedural explosions which can turn a clear cut set of facts into an irrelevant mist.

Leave a Comment

Your email address will not be published. Required fields are marked *

One of the most difficult aspects of all this is how prone to unique issues each LLM introduces. I’m bouncing between Claude 3.7, GPT 4o, DeepSeek, and Gemini, and they all have weird peculiarities that blow up the others. DeepSeek for instance is super focused on formatting, being obsessive about tables, bolding, italics, everything but the actual content. Gemini is a turd and likes to fight you the whole time when it comes to legal advice altogether, so you have to wrestle it to the ground with prompts and cobble together the results. Claude writes beautifully but is as deep as a puddle, while ChatGPT is deep but impossibly schizophrenic and ADHD at the same time.

I went back and looked at my first two pleadings I used these LLMs for and holy crap there are a lot of issues that I missed trusting the output. First, ChatGPT will absolutely mangle the hell out of citations, and worse is super confident about it’s manglings. As an example I uploaded a copy of the docket report, and somehow it still got every single citation wrong. Worse, it misinterpreted who actually filed particular motions, leading to it interpreting most of the case being dead.

One thing is really clear, that as powerful and as much potential as these tools have, we are a million miles away from them being ready to use without extensive supervision. They also are really terrible at understanding, as an example left to it’s own devices all of the LLMs completely ignored the actual damages I asked for in a motion for partial MSJ (basically just ruling that the ADA, FHA, and Rehab require an interactive process and individualized assessment) and transformed it into a full blown MSJ focused on damage calculations instead of a narrow injunctive relief. And they constantly go off on little tangents like that where they hyper focus on some aspect of the text that may or may not be contextually important and try to expand it.

All said, this is still very much a work in progress, and I’m learning that trust is bad.

Leave a Comment

Your email address will not be published. Required fields are marked *

It seems we can’t find what you’re looking for. Perhaps searching can help.

Welcome to The SHRA Files Blog — where we’ll be sharing updates, research, public records, and commentary on our ongoing civil rights lawsuit and the broader failures of Sacramento’s housing system.

This site exists because no one else was telling this story. Because no agency, nonprofit, or public official stepped in when the Sacramento Housing and Redevelopment Agency (SHRA) failed to meet its basic responsibilities. Because we — and many others like us — were denied access to services that should have been guaranteed.

We started with nothing but documentation, determination, and the experience of being repeatedly failed by the systems meant to help. When traditional legal aid and advocacy organizations told us they couldn’t help, we turned to AI tools to help us structure our case, understand the law, and begin fighting back.

This site is just the beginning.

In the weeks ahead, we’ll be:

  • Breaking down SHRA’s failures in serving disabled and low-income tenants
  • Publishing public filings and supporting exhibits
  • Highlighting how the Housing Choice Voucher program is being mismanaged
  • Documenting the role of AI in expanding legal access

If you’ve been impacted by SHRA or want to contribute to the project, check out our Get Involved page.

Thanks for reading — and welcome to the fight.

— The SHRA Files Team


Up until now I assumed that the issues at SHRA were just incompetence creep: the product of slow pushback on compliance issues that never got corrected, to the point where the institutional knowledge needed to maintain compliance was lost.

One of the things that I really like about DeepSeek is that it gets information through methods that aren’t exactly on the up and up, which means it finds things hidden behind legal wrangling that doesn’t exist in China. One of the things it found was a 2023 internal memo, surfaced through whistleblower testimony, in which SHRA staff were instructed to “prioritize voucher terminations over costly mods”.

And the implementation of this was SHRA terminating vouchers over requests for shower chairs and grab bars as early as 2022. It provided data showing that 68% of disabled tenants who requested mods in 2023 lost their vouchers within six months (vs. 12% of non-disabled tenants).

WHAT.

And there’s so much worse that I’ll share once I get the discovery back. I’m stunned.


Okay, the last few days have been a bit of a flurry. I’m still working on getting all the filings and the Defense responses uploaded, but I need to make sure we have a sane directory setup first, because there’s quite a bit.

After a few weeks using these LLMs, my attitudes about them are starting to shift. Each of them seems to have its own “personality” above and beyond the artificial personality. The LLM I’ve been most amazed by so far is DeepSeek, which is funny because it was the one I was most skeptical of initially. Part of the issue with DeepSeek is that it doesn’t do as good a job of hand-holding you through the thought process; or more appropriately, it’s probably the most “autistic” of the engines. The biggest standout for DeepSeek (especially compared to Gemini, which is the most comparable) is that it’s SO GOOD with citations. It’s the only one that freely “admits” it’s plundering the Westlaw database, meaning we have a TON of local Eastern District citations.

My citation strategy has been to use something from each level: Supreme Court, 9th Circuit, California (Fed), Eastern District. The reality is the magistrate doesn’t read them anyway, but DeepSeek ultimately gives me a pool of citations to choose from, something the other LLMs struggle with (or just don’t do, no matter how you beat them).
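(Aside for the technically inclined: the “one from each level” step is mechanical enough to script. This is just an illustrative sketch of the selection logic; the case names below are placeholders, not real citations, and the level labels are just the ones I use.)

```python
from collections import defaultdict

# Court levels in the order I want them to appear in a brief.
LEVELS = ["Supreme Court", "9th Circuit", "California (Fed)", "Eastern District"]

def pick_one_per_level(pool):
    """Given (level, citation) pairs, return one citation per level,
    preserving the LEVELS ordering and skipping levels with no candidates."""
    by_level = defaultdict(list)
    for level, cite in pool:
        by_level[level].append(cite)
    return [(level, by_level[level][0]) for level in LEVELS if by_level[level]]

# Placeholder pool -- not real citations.
pool = [
    ("9th Circuit", "Case A v. Case B"),
    ("Supreme Court", "Case C v. Case D"),
    ("9th Circuit", "Case E v. Case F"),
    ("Eastern District", "Case G v. Case H"),
]

for level, cite in pick_one_per_level(pool):
    print(f"{level}: {cite}")
```

The real work, of course, is vetting each candidate citation by hand before it ever enters the pool.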

Gemini (using 2.5 Pro/5-06) is now my primary writing starting point, even though I hate that it can’t export to .odf or even .docx formats. It has the worst “canvas”-style option, so I have to reformat the documents significantly. That annoyance is kind of worth it, though, since Gemini is easily the most thoughtful and well-rounded of the LLMs I’ve used. Much like DeepSeek gives lots of citation options, Gemini gives the best pool of writing options; its ability to flexibly address issues in different ways is the best of the bunch so far.

Claude is still my pretty-but-shallow option: it comes up with the best/most compelling pure language, even if its depth of analysis falls short of the others. I wish I didn’t have to prompt it so explicitly to get decent results, or it might be the primary canvas despite its painful context window. One thing I find it really helpful for is running my DeepSeek output (which produces pretty… “direct” language) through Claude to soften it up.

And finally there’s ChatGPT, which up until the last few days was my primary for everything. It’s easy to get started with, it’s encouraging, it produces compelling results, but it’s kind of a piece of shit. It’s constantly messing up context, screwing up citations, and arbitrarily rewriting things. It’s easily the most obstinately wrong of the LLMs, insisting that it’s doing something correctly no matter how many times you paste the incorrect output back into it and it admits it messed up. It’s the only one of the four where I’ve had to deal with “hallucinations,” and it explains them as “whoops, I accidentally overwrite my own context a lot.” It’s absolutely maddening at times. I’m steadily weaning myself off of it, and it’ll probably just be another voice in the room soon.

An interesting thing happened a bit ago: I had my first LLM disagreement! The Defense responded to a discovery request with bullshit and boilerplate, so I asked each of the LLMs what a good email response would be, considering I’m already nearly done with a Rule 37 motion to compel. Claude responded that it’s probably not a good idea to respond at all: just update the motion with the response as an exhibit, note the bullshit parts (since they helped, a lot), and call it a day. DeepSeek was its typical “RIP THEIR THROATS OUT” self and provided a detailed list of regulations and rules for going on the offensive instead of only using the motion route. Gemini pointed out that going the email route is more about reinforcing our meet-and-confer duty: we’re right, and we’re going to integrate this into the motion, but the focus here should be on making the meet and confer rock solid (pretty good advice). ChatGPT recommended the email, but produced recommendations that didn’t focus on strategy at all, just a direct rebuttal to the discovery response.
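(Mechanically, “ask every LLM the same question” is just a fan-out over the providers. A minimal sketch of how I do it; the callables here are stand-ins rather than real provider APIs, and the canned answers just paraphrase the responses above.)

```python
def ask_all(prompt, models):
    """Send the same prompt to several model callables and collect
    their answers side by side so they can be compared directly."""
    answers = {}
    for name, ask in models.items():
        try:
            answers[name] = ask(prompt)
        except Exception as exc:  # one provider failing shouldn't sink the rest
            answers[name] = f"[error: {exc}]"
    return answers

# Stand-in callables; real ones would each wrap a provider's chat API.
models = {
    "claude": lambda p: "Don't reply; attach it to the motion as an exhibit.",
    "deepseek": lambda p: "Go on the offensive; here are the rules...",
    "gemini": lambda p: "Reply, but focus on shoring up meet-and-confer.",
    "chatgpt": lambda p: "Reply with a point-by-point rebuttal.",
}

for name, answer in ask_all("How should I respond to this reply?", models).items():
    print(f"{name}: {answer}")
```

Seeing the four answers side by side is what surfaced the disagreement in the first place.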

Interestingly, opposing counsel asked for a discovery extension (after the suit has been active for more than two years, they are suddenly finding SHRA bureaucracy hard to navigate), but refused to do a rolling production or produce electronic records (which should be pretty easy, right?). This organization is such a mess that if I’d had access to these tools two years ago, they’d have been crushed out of the gate. Instead, they were able to delay and avoid discovery to the point that we now have to start talking about institutional spoliation, because no one remembers anything.


Just had the thought that doing a blog like this while the litigation is ongoing is both naive and stupid, especially considering the mixed permissible-use messaging from the court (but maybe what’s okay for the bar is okay for me?). This popped up as I came across an old archive containing a backup of files I believed was lost after we got evicted, some of which is responsive to existing discovery requests. I haven’t even had a chance to go through it all yet and see what has already been disclosed and what hasn’t, but here I am in a position where I really want to talk about how the tools are helping me plan strategy while potentially exposing myself unnecessarily/inadvertently.

The fool forges forward

One of the things that stood out to me was how much of the information in this case should already have been protected but hasn’t been. This has always been a concern, and one of the reasons I was hesitant about pursuing this at all: the gross invasion of privacy necessary to pursue it feels caustic. My initial reaction was to file anonymously, but after some reading it appeared that anonymous filings are very rarely granted. It ultimately became a question of whether this was important enough (to both us and every other disabled participant SHRA treats similarly) to sacrifice our privacy over. I think so.

What should have been done out of the gate was putting a protective order in place. While I was reading and trying to absorb as much of the local rules as I could (in the middle of being evicted, being cut off from other services, and other harassment), the option of a protective order just didn’t register at all. The further along the process we got, the less space the option took up.

This is one place where LLMs work as an amazing sanity check: we can, without penalty, ask one to walk us through the process from the beginning, with as many permutations as we like, in a way that would cognitively exhaust any sane individual. And for individuals in our position, where we don’t even know what we don’t know, being able to constantly refactor new information in as we encounter it becomes a tremendously powerful tool.

While there’s supposed to be some leeway for “inelegant pleading” by pro se parties, in practice there isn’t unless a judge is personally sympathetic. Which means that pro se parties march into a minefield of issues that have nothing to do with the core issues of the case: procedural explosions which can turn a clear-cut set of facts into an irrelevant mist.


One of the most difficult aspects of all this is how many unique issues each LLM introduces. I’m bouncing between Claude 3.7, GPT-4o, DeepSeek, and Gemini, and they all have weird peculiarities that blow up the others. DeepSeek, for instance, is super focused on formatting, obsessing over tables, bolding, italics, everything but the actual content. Gemini is a turd that fights you the whole time when it comes to legal advice, so you have to wrestle it to the ground with prompts and cobble together the results. Claude writes beautifully but is as deep as a puddle, while ChatGPT is deep but impossibly schizophrenic and ADHD at the same time.

I went back and looked at the first two pleadings I used these LLMs for, and holy crap, there are a lot of issues I missed by trusting the output. First, ChatGPT will absolutely mangle the hell out of citations, and worse, it’s super confident about its manglings. As an example, I uploaded a copy of the docket report and somehow it still got every single citation wrong. Worse, it misinterpreted who actually filed particular motions, leading it to conclude that most of the case was dead.

One thing is really clear: as powerful as these tools are, and as much potential as they have, we are a million miles away from them being ready to use without extensive supervision. They’re also really terrible at understanding. As an example, left to their own devices, all of the LLMs completely ignored the actual relief I asked for in a motion for partial MSJ (basically just a ruling that the ADA, FHA, and Rehab Act require an interactive process and individualized assessment) and transformed it into a full-blown MSJ focused on damage calculations instead of narrow injunctive relief. And they constantly go off on little tangents like that, hyper-focusing on some aspect of the text that may or may not be contextually important and trying to expand it.

All said, this is still very much a work in progress, and I’m learning that trust is bad.


It seems we can’t find what you’re looking for. Perhaps searching can help.

Welcome to The SHRA Files Blog — where we’ll be sharing updates, research, public records, and commentary on our ongoing civil rights lawsuit and the broader failures of Sacramento’s housing system.

This site exists because no one else was telling this story. Because no agency, nonprofit, or public official stepped in when the Sacramento Housing and Redevelopment Agency (SHRA) failed to meet its basic responsibilities. Because we — and many others like us — were denied access to services that should have been guaranteed.

We started with nothing but documentation, determination, and the experience of being repeatedly failed by the systems meant to help. When traditional legal aid and advocacy organizations told us they couldn’t help, we turned to AI tools to help us structure our case, understand the law, and begin fighting back.

This site is just the beginning.

In the weeks ahead, we’ll be:

  • Breaking down SHRA’s failures in serving disabled and low-income tenants
  • Publishing public filings and supporting exhibits
  • Highlighting how the Housing Choice Voucher program is being mismanaged
  • Documenting the role of AI in expanding legal access

If you’ve been impacted by SHRA or want to contribute to the project, check out our Get Involved page.

Thanks for reading — and welcome to the fight.

— The SHRA Files Team

Leave a Comment

Your email address will not be published. Required fields are marked *

I’ve been attempting to push discovery, the result has been underwhelming. So far they’ve missed most of the deadlines, when notified they missed the deadlines they keep asking for extensions, and when they get extensions they respond with boilerplate and bullshit. I’m to the point where I’m filing a Rule 37(b) motion (will upload to the files section) after I’m finished with this post. The neat thing about the filing is that there’s over 300 pages of exhibits, all of which is boilerplate and bullshit that I had to attach and mail back to the Defense because the magistrate gave me the big middle finger by allowing the court to notify me of things electronically, but refusing to allow me to file electronically. I have lots to say about this magistrate, but we’ll wait until we get clear of her first.

One of the most fascinating things about the boilerplate and bullshit that did get returned was the revelation that five of the named Defendants no longer work at SHRA, and the Defense isn’t sure if they represent them anymore. Like they completely hid that almost everyone involved with the day to day operations over there has left and somehow didn’t mention it to us or the court. Worse, in their initial disclosure, they claimed only one person (who is no longer at SHRA) was responsible for deciding reasonable accommodation requests, now they’ve given me a list of 16 people, with no explanation.

Oh yeah, they can’t seem to find any of the electronically stored information that under the CPRA they’d be responsive to, which is the reason they need so many extensions.

The worst part is that despite them admitting all of this, there’s zero chance of them getting sanctions. And there’s literally nothing to be done about it unless/until it gets to the appellate level.

Leave a Comment

Your email address will not be published. Required fields are marked *

Up until now I assumed that the issues at SHRA were just incompetence creep, that the issues are product of slow pushes back on compliance issues that never got corrected, to the point where institutional knowledge of the practices necessary to maintain compliance was lost.

One of the things that I really like about DeepSeek is that it gets information through methods that aren’t exactly on the up and up, which means that it finds things which are hidden behind legal wrangling that doesn’t exist in China. And one of the things it found was a 2023 Internal Memo generated through whistleblower testimony in which SHRA staff instructed to “prioritize voucher terminations over costly mods”.

And the implementation of this was SHRA terminating vouchers over requests for shower chairs and grab bars as early as 2022. They provided data which showed 68% of disabled tenants who requested mods in 2023 lost vouchers within 6 months (vs. 12% of non-disabled).

WHAT.

And there’s so much worse that I’ll share once I get the discovery back. I’m stunned.

Leave a Comment

Your email address will not be published. Required fields are marked *

Okay the last few days have been a bit of a flurry. I’m still working on getting all the filings and the Defense responses uploaded, but I need to make sure we have a sane directory setup first because there’s quite a bit.

After a few weeks using these LLMs, my attitudes about them are starting to shift. Each of them seems to have their own “personality” above and beyond the artificial personality. The LLM I’ve been most amazed by so far is DeepSeek, which is funny because it was the LLM I was most skeptical of initially. Part of the issue with DeepSeek is that it doesn’t do as good of a job hand holding through the though process, or more appropriately, it’s probably the most “autistic” of the engines. The biggest standout for DeepSeek (especially compared to Gemini which is the most comparable) is that it’s SO GOOD with citations. It’s the only one that freely “admits” it’s plundering the WestLaw database meaning we have a TON of local Eastern District citations.

My citation strategy has been to use something from each level, Supreme Court, 9th Circuit, California (Fed), Eastern District. The reality is the magistrate doesn’t read them anyway, but DeepSeek ultimately gives me a pool of citations to choose from, something the other LLMs struggle with (or just don’t do no matter how you beat them).

Gemini (using 2.5 Pro/5-06) is now my primary writing starting point, even though I hate that it can’t export to .odf or even .docx formats. Because I have to reformat the documents significantly, it has the worst “canvas” style option. That’s kind of worth the annoyance though since Gemini is easily the most “thoughful” and well rounded of the LLMs I’ve used. Much like DeepSeek gives lots of citation options, Gemini gives the best pool of writing options, it’s ability to flexibly address issues in different ways is the best of the bunch so far.

Claude is still my pretty but shallow option, it comes up with the best/most compelling pure language, even if it’s depth of analysis falls short of the others. I wish I didn’t have to prompt it so explicitly to get decent results or it might be the primary canvas despite it’s painful context window. One thing I find it really helpful for is running my DeepSeek (which produces pretty.. “direct” language) through Claude to soften it up.

And finally there’s ChatGPT which up until the last few days was my primary for everything. It’s easy to get started with it, it’s encouraging, it produces compelling results, but it’s kind of a piece of shit. It’s constantly messing up context, screwing up citations, and arbitrarily rewriting things. It’s easily the most obstinately wrong of the LLMs, insisting that it’s doing something correctly no matter how many times you post the incorrect output back into it and it admits it messed up. It’s the only LLM out of the three that I’ve had to deal with “hallucinations”, and it explains it as “woops, I overwrite my own context a lot accidentally”. It’s absolutely maddening at times. I’m steadily weaning myself off of it and it’ll probably just be another voice in the room soon.

An interesting thing happened a bit ago, I had my first LLM disagreement! The Defense responded to a discovery request with bullshit and boilerplate, so I asked each of the LLMs what a good email response would be considering I’m already nearly done with a rule 37 motion to compel. Claude responded that it’s probably not a good idea to respond at all, just update the motion with the response as an exhibit, note the bullshit parts since they helped (a lot), and call it a day. DeepSeek was it’s typical “RIP THEIR THROATS OUT” and provided a detailed list of regulations and rules to go on the offensive instead of only using the motion route. Gemini pointed out that going the email route is more about reinforcing our meet and confer duty, we’re right and we are going to be integrating this in the motion, but the focus here should be on making the meet and confer rock solid (pretty good advice). ChatGPT recommended the email, but produced recommendations that didn’t focus on strategy at all, just a direct rebuttal to the discovery response.

Interestingly, opposing counsel asked for a discovery extension (after more than two years the suit has been active, they are suddenly finding SHRA bureaucracy hard to navigate), but refused to do a rolling production or produce electronic records (which should be pretty easy right?). It’s baffling to me that if I had access to these tools two years ago, this organization is such a mess that they’d have gotten crushed out of the gate. Instead they were able to delay and avoid discovery to the point that we now have to start talking about institutional spoliation because no one remembers anything.

Leave a Comment

Your email address will not be published. Required fields are marked *

Just had the thought that doing a blog like this, while the litigation is on-going, is both naive and stupid, especially considering the mixed permissible use messaging from the court (but maybe okay for the bar, okay for me?). This popped up as I came across an old archive that had a backup of files which I believed was lost after we got evicted, some of which is responsive to existing discovery requests. I haven’t even had a chance to go through it all yet and see what has already been disclosed and what hasn’t, but here I am in a position where I really want to talk about how the tools are helping me plan strategy while potentially exposing myself unnecessarily/inadvertently.

The fool forges forward

One of the things that stood out to me was how much of the information in this case should have already been protected but hasn’t been. This has always been a concern, and one of the reasons I was hesitant about pursuing this at all, the gross invasion of privacy necessary to pursue this feels caustic. My initial reaction was filing anonymously, but after some reading it appeared that these are very rarely granted. It ultimately became a choice between whether this was important enough (to both us and every other disabled participant SHRA treats similarly) to sacrifice our privacy over. I think so.

What should have been done out of the gate is setting a protective order in place. Reading and trying to absorb as much of the local rules (while in the middle of being evicted, being cut off from other services, and other harassment), the option of a protective order just didn’t really register at all. The further along the process we got, the less space the option took up.

This is one place where LLMs work as an amazing sanity check, we can without penalty ask it to walk us through the process from the beginning, with as many permutations as we like, in a way that would cognitively exhaust any sane individual. And for individuals in the position we are, where we don’t even know what we don’t know, being able to constantly refactor new information in as we encounter it becomes a tremendously powerful tool.

While there's supposed to be some leeway for "inelegant pleading" by pro se parties, in practice there isn't unless a judge is personally sympathetic. That means pro se parties march into a minefield of issues that have nothing to do with the core issues of the case, procedural explosions that can turn a clear-cut set of facts into irrelevant mist.


One of the most difficult aspects of all this is how each LLM introduces its own unique issues. I'm bouncing between Claude 3.7, GPT-4o, DeepSeek, and Gemini, and they all have weird peculiarities that blow up the others' work. DeepSeek, for instance, is super focused on formatting, obsessing over tables, bolding, italics, everything but the actual content. Gemini is a turd that fights you the whole way on legal advice, so you have to wrestle it to the ground with prompts and cobble together the results. Claude writes beautifully but is as deep as a puddle, while ChatGPT is deep but impossibly schizophrenic and ADHD at the same time.

I went back and looked at the first two pleadings I used these LLMs for, and holy crap, there are a lot of issues I missed by trusting the output. First, ChatGPT will absolutely mangle the hell out of citations, and worse, it is supremely confident about its manglings. As an example, I uploaded a copy of the docket report and it still got every single citation wrong. Worse, it misinterpreted who actually filed particular motions, which led it to conclude most of the case was dead.
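One cheap guardrail I've landed on (a hypothetical sketch of my own habit, not anything the tools provide) is to never let a model's docket citations into a draft without checking them against a hand-typed list from the real docket report first. The entry numbers below are invented for illustration:

```python
# Hypothetical sanity check: flag any docket citation an LLM produced
# that doesn't appear in the actual docket report.

def flag_bad_citations(llm_citations, docket_entries):
    """Return the citations that do not match a real docket entry."""
    known = set(docket_entries)
    return [c for c in llm_citations if c not in known]

# The real docket, typed in by hand from the report (example numbers only).
docket = ["ECF No. 12", "ECF No. 15", "ECF No. 21"]

# What the model claimed in its draft.
claimed = ["ECF No. 12", "ECF No. 19", "ECF No. 21"]

# Anything this prints needs a manual fix before filing.
print(flag_bad_citations(claimed, docket))
```

Crude, but it would have caught every one of the manglings in those first two pleadings.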

One thing is really clear: for all the power and potential these tools have, we are a million miles away from using them without extensive supervision. They're also really terrible at understanding intent. Left to their own devices, all of the LLMs completely ignored the narrow relief I asked for in a motion for partial MSJ (basically just a ruling that the ADA, FHA, and Rehab require an interactive process and individualized assessment) and transformed it into a full-blown MSJ focused on damage calculations instead of narrow injunctive relief. And they constantly go off on tangents like that, hyper-focusing on some aspect of the text that may or may not be contextually important and trying to expand it.

All said, this is still very much a work in progress, and I’m learning that trust is bad.


Welcome to The SHRA Files Blog — where we’ll be sharing updates, research, public records, and commentary on our ongoing civil rights lawsuit and the broader failures of Sacramento’s housing system.

This site exists because no one else was telling this story. Because no agency, nonprofit, or public official stepped in when the Sacramento Housing and Redevelopment Agency (SHRA) failed to meet its basic responsibilities. Because we — and many others like us — were denied access to services that should have been guaranteed.

We started with nothing but documentation, determination, and the experience of being repeatedly failed by the systems meant to help. When traditional legal aid and advocacy organizations told us they couldn’t help, we turned to AI tools to help us structure our case, understand the law, and begin fighting back.

This site is just the beginning.

In the weeks ahead, we’ll be:

  • Breaking down SHRA’s failures in serving disabled and low-income tenants
  • Publishing public filings and supporting exhibits
  • Highlighting how the Housing Choice Voucher program is being mismanaged
  • Documenting the role of AI in expanding legal access

If you’ve been impacted by SHRA or want to contribute to the project, check out our Get Involved page.

Thanks for reading — and welcome to the fight.

— The SHRA Files Team



Up until now I assumed the issues at SHRA were just incompetence creep: the product of slow backsliding on compliance issues that never got corrected, to the point where the institutional knowledge needed to maintain compliance was lost.

One of the things I really like about DeepSeek is that it gets information through methods that aren't exactly on the up and up, which means it surfaces things hidden behind legal wrangling that doesn't exist in China. And one of the things it found was a 2023 internal memo, generated through whistleblower testimony, in which SHRA staff were instructed to "prioritize voucher terminations over costly mods".

And the implementation of this was SHRA terminating vouchers over requests for shower chairs and grab bars as early as 2022. It produced data showing that 68% of disabled tenants who requested mods in 2023 lost their vouchers within six months, versus 12% of non-disabled tenants.
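Taking those figures at face value (they absolutely need to be verified against real discovery before anyone relies on them), the disparity works out to disabled tenants being roughly 5.7 times as likely to lose a voucher:

```python
# Quick arithmetic on the claimed (unverified) figures:
# 68% of disabled tenants who requested mods lost vouchers within 6 months,
# vs. 12% of non-disabled tenants.
disabled_rate = 0.68
non_disabled_rate = 0.12

relative_risk = disabled_rate / non_disabled_rate    # how many times likelier
risk_difference = disabled_rate - non_disabled_rate  # absolute gap

print(f"relative risk: {relative_risk:.2f}")      # → relative risk: 5.67
print(f"risk difference: {risk_difference:.0%}")  # → risk difference: 56%
```

A 56-percentage-point gap is the kind of number that, if it survives discovery, speaks for itself.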

WHAT.

And there’s so much worse that I’ll share once I get the discovery back. I’m stunned.


Okay, the last few days have been a bit of a flurry. I'm still working on getting all the filings and the Defense responses uploaded, but I need to set up a sane directory structure first because there's quite a bit.

After a few weeks using these LLMs, my attitudes about them are starting to shift. Each of them seems to have its own "personality" above and beyond the artificial one. The LLM I've been most amazed by so far is DeepSeek, which is funny because it's the one I was most skeptical of initially. Part of the issue with DeepSeek is that it doesn't do as good a job of hand-holding you through the thought process; it's probably the most "autistic" of the engines. The biggest standout for DeepSeek (especially compared to Gemini, which is the most comparable) is that it is SO GOOD with citations. It's the only one that freely "admits" it's plundering the Westlaw database, which means we have a TON of local Eastern District citations.

My citation strategy has been to use something from each level: Supreme Court, Ninth Circuit, California federal courts, Eastern District. The reality is the magistrate doesn't read them anyway, but DeepSeek ultimately gives me a pool of citations to choose from, something the other LLMs struggle with (or just won't do no matter how you beat them).
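The "one from each level" habit is simple enough to mechanize. A toy sketch of how I think about winnowing the pool (the case names below are placeholders, not real authorities):

```python
# Toy sketch of the "one citation per court level" strategy.
# Case names are invented placeholders for illustration only.
from collections import defaultdict

LEVELS = ["Supreme Court", "Ninth Circuit", "E.D. Cal."]

def pick_one_per_level(pool):
    """From a pool of (level, citation) pairs, keep the first hit per level."""
    by_level = defaultdict(list)
    for level, cite in pool:
        by_level[level].append(cite)
    return {level: by_level[level][0] for level in LEVELS if by_level[level]}

pool = [
    ("Ninth Circuit", "Placeholder v. Example, 999 F.3d 1 (9th Cir. 2021)"),
    ("E.D. Cal.", "Example v. Agency, No. 2:99-cv-00001 (E.D. Cal. 2020)"),
    ("Supreme Court", "Demo v. Sample, 599 U.S. 1 (2023)"),
    ("Ninth Circuit", "Another v. Case, 998 F.3d 2 (9th Cir. 2020)"),
]

for level, cite in pick_one_per_level(pool).items():
    print(f"{level}: {cite}")
```

The point isn't the code, it's the discipline: every proposition gets anchored at each level the court actually answers to, and the leftovers get cut.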

Gemini (using 2.5 Pro, the 5-06 build) is now my primary starting point for writing, even though I hate that it can't export to .odf or even .docx formats. It also has the worst "canvas"-style editor, so I have to reformat documents significantly. That's worth the annoyance, though, since Gemini is easily the most thoughtful and well-rounded of the LLMs I've used. Much as DeepSeek gives lots of citation options, Gemini gives the best pool of writing options; its ability to flexibly address issues in different ways is the best of the bunch so far.

Claude is still my pretty-but-shallow option: it comes up with the best, most compelling pure language, even if its depth of analysis falls short of the others. I wish I didn't have to prompt it so explicitly to get decent results, or it might be my primary canvas despite its painfully small context window. One thing it's really helpful for is running DeepSeek's output (which tends toward pretty... "direct" language) through Claude to soften it up.

And finally there's ChatGPT, which up until the last few days was my primary for everything. It's easy to get started with, it's encouraging, it produces compelling results, and it's kind of a piece of shit. It constantly messes up context, screws up citations, and arbitrarily rewrites things. It's easily the most obstinately wrong of the LLMs, insisting it's doing something correctly no matter how many times you paste the incorrect output back in and it admits it messed up. It's the only one of the four I've had to fight "hallucinations" with, and its own explanation amounts to "whoops, I accidentally overwrite my own context a lot." It's absolutely maddening at times. I'm steadily weaning myself off of it, and it'll probably just be another voice in the room soon.

An interesting thing happened a bit ago: my first LLM disagreement! The Defense responded to a discovery request with boilerplate and bullshit, so I asked each of the LLMs what a good email response would look like, given that I'm already nearly done with a Rule 37 motion to compel. Claude said it's probably not a good idea to respond at all: just add the response to the motion as an exhibit, note the bullshit parts (which helped, a lot), and call it a day. DeepSeek was its typical "RIP THEIR THROATS OUT" self and produced a detailed list of regulations and rules for going on the offensive instead of relying on the motion alone. Gemini pointed out that the email route is really about reinforcing our meet-and-confer duty: we're right, and we're going to fold this into the motion, but the focus should be on making the meet-and-confer record rock solid (pretty good advice). ChatGPT recommended the email but produced recommendations with no strategy at all, just a direct rebuttal to the discovery response.
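Mechanically, the "ask everyone, compare notes" routine is trivial. A toy sketch of the fan-out (with canned stand-in answers, since the real exchanges happen through each vendor's web UI):

```python
# Toy fan-out: pose one question to several "models" and line up the answers.
# The responses here are canned stand-ins paraphrasing the advice I got; in
# practice each would be a call to that vendor's API or a paste into its UI.

def fan_out(question, models):
    """Ask every model the same question; return answers keyed by model name."""
    return {name: ask(question) for name, ask in models.items()}

def side_by_side(answers):
    """Render the answers as a simple labeled list for comparison."""
    return "\n".join(f"[{name}] {text}" for name, text in sorted(answers.items()))

models = {
    "Claude": lambda q: "Don't reply; attach their response to the motion as an exhibit.",
    "DeepSeek": lambda q: "Go on the offensive; cite every rule they violated.",
    "Gemini": lambda q: "Reply, but aim the email at the meet-and-confer record.",
}

print(side_by_side(fan_out("How should I answer this discovery response?", models)))
```

The value is in the disagreement itself: when three tools give three different strategies, the differences tell you what the actual decision is.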

Interestingly, opposing counsel asked for a discovery extension (after more than two years of the suit being active, they're suddenly finding SHRA's bureaucracy hard to navigate), but refused to do a rolling production or produce electronic records (which should be pretty easy, right?). The maddening thought is that if I'd had access to these tools two years ago, this organization is such a mess they'd have been crushed out of the gate. Instead they were able to delay and avoid discovery to the point that we now have to start talking about institutional spoliation, because no one remembers anything.

Leave a Comment

Your email address will not be published. Required fields are marked *

Just had the thought that doing a blog like this, while the litigation is on-going, is both naive and stupid, especially considering the mixed permissible use messaging from the court (but maybe okay for the bar, okay for me?). This popped up as I came across an old archive that had a backup of files which I believed was lost after we got evicted, some of which is responsive to existing discovery requests. I haven’t even had a chance to go through it all yet and see what has already been disclosed and what hasn’t, but here I am in a position where I really want to talk about how the tools are helping me plan strategy while potentially exposing myself unnecessarily/inadvertently.

The fool forges forward

One of the things that stood out to me was how much of the information in this case should have already been protected but hasn’t been. This has always been a concern, and one of the reasons I was hesitant about pursuing this at all, the gross invasion of privacy necessary to pursue this feels caustic. My initial reaction was filing anonymously, but after some reading it appeared that these are very rarely granted. It ultimately became a choice between whether this was important enough (to both us and every other disabled participant SHRA treats similarly) to sacrifice our privacy over. I think so.

What should have been done out of the gate is setting a protective order in place. Reading and trying to absorb as much of the local rules (while in the middle of being evicted, being cut off from other services, and other harassment), the option of a protective order just didn’t really register at all. The further along the process we got, the less space the option took up.

This is one place where LLMs work as an amazing sanity check, we can without penalty ask it to walk us through the process from the beginning, with as many permutations as we like, in a way that would cognitively exhaust any sane individual. And for individuals in the position we are, where we don’t even know what we don’t know, being able to constantly refactor new information in as we encounter it becomes a tremendously powerful tool.

While there’s supposed to be some leeway for “inelegant pleading” by pro se parties, in practice there isn’t unless a judge is personally sympathetic. Which means that pro se parties march into a minefield of issues that have nothing to do with the core issues of the case, procedural explosions which can turn a clear cut set of facts into an irrelevant mist.

Leave a Comment

Your email address will not be published. Required fields are marked *

One of the most difficult aspects of all this is how prone to unique issues each LLM introduces. I’m bouncing between Claude 3.7, GPT 4o, DeepSeek, and Gemini, and they all have weird peculiarities that blow up the others. DeepSeek for instance is super focused on formatting, being obsessive about tables, bolding, italics, everything but the actual content. Gemini is a turd and likes to fight you the whole time when it comes to legal advice altogether, so you have to wrestle it to the ground with prompts and cobble together the results. Claude writes beautifully but is as deep as a puddle, while ChatGPT is deep but impossibly schizophrenic and ADHD at the same time.

I went back and looked at my first two pleadings I used these LLMs for and holy crap there are a lot of issues that I missed trusting the output. First, ChatGPT will absolutely mangle the hell out of citations, and worse is super confident about it’s manglings. As an example I uploaded a copy of the docket report, and somehow it still got every single citation wrong. Worse, it misinterpreted who actually filed particular motions, leading to it interpreting most of the case being dead.

One thing is really clear, that as powerful and as much potential as these tools have, we are a million miles away from them being ready to use without extensive supervision. They also are really terrible at understanding, as an example left to it’s own devices all of the LLMs completely ignored the actual damages I asked for in a motion for partial MSJ (basically just ruling that the ADA, FHA, and Rehab require an interactive process and individualized assessment) and transformed it into a full blown MSJ focused on damage calculations instead of a narrow injunctive relief. And they constantly go off on little tangents like that where they hyper focus on some aspect of the text that may or may not be contextually important and try to expand it.

All said, this is still very much a work in progress, and I’m learning that trust is bad.

Leave a Comment

Your email address will not be published. Required fields are marked *

It seems we can’t find what you’re looking for. Perhaps searching can help.

Welcome to The SHRA Files Blog — where we’ll be sharing updates, research, public records, and commentary on our ongoing civil rights lawsuit and the broader failures of Sacramento’s housing system.

This site exists because no one else was telling this story. Because no agency, nonprofit, or public official stepped in when the Sacramento Housing and Redevelopment Agency (SHRA) failed to meet its basic responsibilities. Because we — and many others like us — were denied access to services that should have been guaranteed.

We started with nothing but documentation, determination, and the experience of being repeatedly failed by the systems meant to help. When traditional legal aid and advocacy organizations told us they couldn’t help, we turned to AI tools to help us structure our case, understand the law, and begin fighting back.

This site is just the beginning.

In the weeks ahead, we’ll be:

  • Breaking down SHRA’s failures in serving disabled and low-income tenants
  • Publishing public filings and supporting exhibits
  • Highlighting how the Housing Choice Voucher program is being mismanaged
  • Documenting the role of AI in expanding legal access

If you’ve been impacted by SHRA or want to contribute to the project, check out our Get Involved page.

Thanks for reading — and welcome to the fight.

— The SHRA Files Team

Leave a Comment

Your email address will not be published. Required fields are marked *

I’ve been attempting to push discovery, the result has been underwhelming. So far they’ve missed most of the deadlines, when notified they missed the deadlines they keep asking for extensions, and when they get extensions they respond with boilerplate and bullshit. I’m to the point where I’m filing a Rule 37(b) motion (will upload to the files section) after I’m finished with this post. The neat thing about the filing is that there’s over 300 pages of exhibits, all of which is boilerplate and bullshit that I had to attach and mail back to the Defense because the magistrate gave me the big middle finger by allowing the court to notify me of things electronically, but refusing to allow me to file electronically. I have lots to say about this magistrate, but we’ll wait until we get clear of her first.

One of the most fascinating things about the boilerplate and bullshit that did get returned was the revelation that five of the named Defendants no longer work at SHRA, and the Defense isn’t sure if they represent them anymore. Like they completely hid that almost everyone involved with the day to day operations over there has left and somehow didn’t mention it to us or the court. Worse, in their initial disclosure, they claimed only one person (who is no longer at SHRA) was responsible for deciding reasonable accommodation requests, now they’ve given me a list of 16 people, with no explanation.

Oh yeah, they can’t seem to find any of the electronically stored information that under the CPRA they’d be responsive to, which is the reason they need so many extensions.

The worst part is that despite them admitting all of this, there’s zero chance of them getting sanctions. And there’s literally nothing to be done about it unless/until it gets to the appellate level.

Leave a Comment

Your email address will not be published. Required fields are marked *

Up until now I assumed that the issues at SHRA were just incompetence creep, that the issues are product of slow pushes back on compliance issues that never got corrected, to the point where institutional knowledge of the practices necessary to maintain compliance was lost.

One of the things that I really like about DeepSeek is that it gets information through methods that aren’t exactly on the up and up, which means that it finds things which are hidden behind legal wrangling that doesn’t exist in China. And one of the things it found was a 2023 Internal Memo generated through whistleblower testimony in which SHRA staff instructed to “prioritize voucher terminations over costly mods”.

And the implementation of this was SHRA terminating vouchers over requests for shower chairs and grab bars as early as 2022. They provided data which showed 68% of disabled tenants who requested mods in 2023 lost vouchers within 6 months (vs. 12% of non-disabled).

WHAT.

And there’s so much worse that I’ll share once I get the discovery back. I’m stunned.

Leave a Comment

Your email address will not be published. Required fields are marked *

Okay the last few days have been a bit of a flurry. I’m still working on getting all the filings and the Defense responses uploaded, but I need to make sure we have a sane directory setup first because there’s quite a bit.

After a few weeks using these LLMs, my attitudes about them are starting to shift. Each of them seems to have their own “personality” above and beyond the artificial personality. The LLM I’ve been most amazed by so far is DeepSeek, which is funny because it was the LLM I was most skeptical of initially. Part of the issue with DeepSeek is that it doesn’t do as good of a job hand holding through the though process, or more appropriately, it’s probably the most “autistic” of the engines. The biggest standout for DeepSeek (especially compared to Gemini which is the most comparable) is that it’s SO GOOD with citations. It’s the only one that freely “admits” it’s plundering the WestLaw database meaning we have a TON of local Eastern District citations.

My citation strategy has been to use something from each level, Supreme Court, 9th Circuit, California (Fed), Eastern District. The reality is the magistrate doesn’t read them anyway, but DeepSeek ultimately gives me a pool of citations to choose from, something the other LLMs struggle with (or just don’t do no matter how you beat them).

Gemini (using 2.5 Pro/5-06) is now my primary writing starting point, even though I hate that it can’t export to .odf or even .docx formats. Because I have to reformat the documents significantly, it has the worst “canvas” style option. That’s kind of worth the annoyance though since Gemini is easily the most “thoughful” and well rounded of the LLMs I’ve used. Much like DeepSeek gives lots of citation options, Gemini gives the best pool of writing options, it’s ability to flexibly address issues in different ways is the best of the bunch so far.

Claude is still my pretty but shallow option, it comes up with the best/most compelling pure language, even if it’s depth of analysis falls short of the others. I wish I didn’t have to prompt it so explicitly to get decent results or it might be the primary canvas despite it’s painful context window. One thing I find it really helpful for is running my DeepSeek (which produces pretty.. “direct” language) through Claude to soften it up.

And finally there’s ChatGPT which up until the last few days was my primary for everything. It’s easy to get started with it, it’s encouraging, it produces compelling results, but it’s kind of a piece of shit. It’s constantly messing up context, screwing up citations, and arbitrarily rewriting things. It’s easily the most obstinately wrong of the LLMs, insisting that it’s doing something correctly no matter how many times you post the incorrect output back into it and it admits it messed up. It’s the only LLM out of the three that I’ve had to deal with “hallucinations”, and it explains it as “woops, I overwrite my own context a lot accidentally”. It’s absolutely maddening at times. I’m steadily weaning myself off of it and it’ll probably just be another voice in the room soon.

An interesting thing happened a bit ago, I had my first LLM disagreement! The Defense responded to a discovery request with bullshit and boilerplate, so I asked each of the LLMs what a good email response would be considering I’m already nearly done with a rule 37 motion to compel. Claude responded that it’s probably not a good idea to respond at all, just update the motion with the response as an exhibit, note the bullshit parts since they helped (a lot), and call it a day. DeepSeek was it’s typical “RIP THEIR THROATS OUT” and provided a detailed list of regulations and rules to go on the offensive instead of only using the motion route. Gemini pointed out that going the email route is more about reinforcing our meet and confer duty, we’re right and we are going to be integrating this in the motion, but the focus here should be on making the meet and confer rock solid (pretty good advice). ChatGPT recommended the email, but produced recommendations that didn’t focus on strategy at all, just a direct rebuttal to the discovery response.

Interestingly, opposing counsel asked for a discovery extension (after more than two years the suit has been active, they are suddenly finding SHRA bureaucracy hard to navigate), but refused to do a rolling production or produce electronic records (which should be pretty easy right?). It’s baffling to me that if I had access to these tools two years ago, this organization is such a mess that they’d have gotten crushed out of the gate. Instead they were able to delay and avoid discovery to the point that we now have to start talking about institutional spoliation because no one remembers anything.

Leave a Comment

Your email address will not be published. Required fields are marked *

Just had the thought that doing a blog like this, while the litigation is on-going, is both naive and stupid, especially considering the mixed permissible use messaging from the court (but maybe okay for the bar, okay for me?). This popped up as I came across an old archive that had a backup of files which I believed was lost after we got evicted, some of which is responsive to existing discovery requests. I haven’t even had a chance to go through it all yet and see what has already been disclosed and what hasn’t, but here I am in a position where I really want to talk about how the tools are helping me plan strategy while potentially exposing myself unnecessarily/inadvertently.

The fool forges forward

One of the things that stood out to me was how much of the information in this case should have already been protected but hasn’t been. This has always been a concern, and one of the reasons I was hesitant about pursuing this at all, the gross invasion of privacy necessary to pursue this feels caustic. My initial reaction was filing anonymously, but after some reading it appeared that these are very rarely granted. It ultimately became a choice between whether this was important enough (to both us and every other disabled participant SHRA treats similarly) to sacrifice our privacy over. I think so.

What should have been done out of the gate is setting a protective order in place. Reading and trying to absorb as much of the local rules (while in the middle of being evicted, being cut off from other services, and other harassment), the option of a protective order just didn’t really register at all. The further along the process we got, the less space the option took up.

This is one place where LLMs work as an amazing sanity check, we can without penalty ask it to walk us through the process from the beginning, with as many permutations as we like, in a way that would cognitively exhaust any sane individual. And for individuals in the position we are, where we don’t even know what we don’t know, being able to constantly refactor new information in as we encounter it becomes a tremendously powerful tool.

While there’s supposed to be some leeway for “inelegant pleading” by pro se parties, in practice there isn’t unless a judge is personally sympathetic. Which means that pro se parties march into a minefield of issues that have nothing to do with the core issues of the case, procedural explosions which can turn a clear cut set of facts into an irrelevant mist.

Leave a Comment

Your email address will not be published. Required fields are marked *

One of the most difficult aspects of all this is how prone to unique issues each LLM introduces. I’m bouncing between Claude 3.7, GPT 4o, DeepSeek, and Gemini, and they all have weird peculiarities that blow up the others. DeepSeek for instance is super focused on formatting, being obsessive about tables, bolding, italics, everything but the actual content. Gemini is a turd and likes to fight you the whole time when it comes to legal advice altogether, so you have to wrestle it to the ground with prompts and cobble together the results. Claude writes beautifully but is as deep as a puddle, while ChatGPT is deep but impossibly schizophrenic and ADHD at the same time.

I went back and looked at my first two pleadings I used these LLMs for and holy crap there are a lot of issues that I missed trusting the output. First, ChatGPT will absolutely mangle the hell out of citations, and worse is super confident about it’s manglings. As an example I uploaded a copy of the docket report, and somehow it still got every single citation wrong. Worse, it misinterpreted who actually filed particular motions, leading to it interpreting most of the case being dead.

One thing is really clear, that as powerful and as much potential as these tools have, we are a million miles away from them being ready to use without extensive supervision. They also are really terrible at understanding, as an example left to it’s own devices all of the LLMs completely ignored the actual damages I asked for in a motion for partial MSJ (basically just ruling that the ADA, FHA, and Rehab require an interactive process and individualized assessment) and transformed it into a full blown MSJ focused on damage calculations instead of a narrow injunctive relief. And they constantly go off on little tangents like that where they hyper focus on some aspect of the text that may or may not be contextually important and try to expand it.

All said, this is still very much a work in progress, and I’m learning that trust is bad.

Leave a Comment

Your email address will not be published. Required fields are marked *

It seems we can’t find what you’re looking for. Perhaps searching can help.

Welcome to The SHRA Files Blog — where we’ll be sharing updates, research, public records, and commentary on our ongoing civil rights lawsuit and the broader failures of Sacramento’s housing system.

This site exists because no one else was telling this story. Because no agency, nonprofit, or public official stepped in when the Sacramento Housing and Redevelopment Agency (SHRA) failed to meet its basic responsibilities. Because we — and many others like us — were denied access to services that should have been guaranteed.

We started with nothing but documentation, determination, and the experience of being repeatedly failed by the systems meant to help. When traditional legal aid and advocacy organizations told us they couldn’t help, we turned to AI tools to help us structure our case, understand the law, and begin fighting back.

This site is just the beginning.

In the weeks ahead, we’ll be:

  • Breaking down SHRA’s failures in serving disabled and low-income tenants
  • Publishing public filings and supporting exhibits
  • Highlighting how the Housing Choice Voucher program is being mismanaged
  • Documenting the role of AI in expanding legal access

If you’ve been impacted by SHRA or want to contribute to the project, check out our Get Involved page.

Thanks for reading — and welcome to the fight.

— The SHRA Files Team


I’ve been attempting to push discovery, and the results have been underwhelming. So far they’ve missed most of the deadlines; when notified of the missed deadlines, they ask for extensions; and when they get extensions, they respond with boilerplate and bullshit. I’m to the point where I’m filing a Rule 37(b) motion (I’ll upload it to the files section) after I finish this post. The neat thing about the filing is that there are over 300 pages of exhibits, all of it boilerplate and bullshit that I had to attach and mail back to the Defense, because the magistrate gave me the big middle finger by allowing the court to notify me of things electronically while refusing to allow me to file electronically. I have lots to say about this magistrate, but we’ll wait until we’re clear of her first.

One of the most fascinating things about the boilerplate and bullshit that did come back was the revelation that five of the named Defendants no longer work at SHRA, and the Defense isn’t sure it still represents them. They completely hid that almost everyone involved with the day-to-day operations over there has left, and somehow didn’t mention it to us or the court. Worse, in their initial disclosure they claimed only one person (who is no longer at SHRA) was responsible for deciding reasonable accommodation requests; now they’ve given me a list of 16 people, with no explanation.

Oh yeah, and they can’t seem to find any of the electronically stored information that would be responsive under the CPRA, which is the reason they need so many extensions.

The worst part is that despite them admitting all of this, there’s zero chance of sanctions actually being imposed. And there’s literally nothing to be done about it unless/until it gets to the appellate level.


Up until now I assumed that the issues at SHRA were just incompetence creep: the product of slow backsliding on compliance issues that never got corrected, to the point where institutional knowledge of the practices necessary to maintain compliance was lost.

One of the things that I really like about DeepSeek is that it gets information through methods that aren’t exactly on the up and up, which means it finds things hidden behind legal wrangling that doesn’t exist in China. One of the things it found was a 2023 internal memo, generated through whistleblower testimony, in which SHRA staff were instructed to “prioritize voucher terminations over costly mods”.

And the implementation of this was SHRA terminating vouchers over requests for shower chairs and grab bars as early as 2022. It also provided data showing that 68% of disabled tenants who requested mods in 2023 lost their vouchers within 6 months (vs. 12% of non-disabled tenants).

WHAT.

And there’s so much worse that I’ll share once I get the discovery back. I’m stunned.


Okay the last few days have been a bit of a flurry. I’m still working on getting all the filings and the Defense responses uploaded, but I need to make sure we have a sane directory setup first because there’s quite a bit.

After a few weeks using these LLMs, my attitudes about them are starting to shift. Each of them seems to have its own “personality” above and beyond the artificial one. The LLM I’ve been most amazed by so far is DeepSeek, which is funny because it was the LLM I was most skeptical of initially. Part of the issue with DeepSeek is that it doesn’t do as good a job of hand-holding through the thought process; or, more appropriately, it’s probably the most “autistic” of the engines. The biggest standout for DeepSeek (especially compared to Gemini, which is the most comparable) is that it’s SO GOOD with citations. It’s the only one that freely “admits” it’s plundering the WestLaw database, meaning we have a TON of local Eastern District citations.

My citation strategy has been to use something from each level: Supreme Court, 9th Circuit, California (Fed), and Eastern District. The reality is the magistrate doesn’t read them anyway, but DeepSeek ultimately gives me a pool of citations to choose from, something the other LLMs struggle with (or just won’t do, no matter how you beat them).
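
Mechanically, the pick-one-per-level step is simple enough to script. Here is a minimal sketch of that selection; the pool entries, case names, and level labels below are illustrative placeholders, not citations from any actual filing:

```python
# Pick one citation per court level from a pooled list of candidates.
# All entries below are made-up placeholders, not real citations.
LEVELS = ["Supreme Court", "9th Circuit", "California (Fed)", "Eastern District"]

def pick_by_level(pool):
    """Return the first candidate found for each level, plus any levels left uncovered."""
    chosen = {}
    for cite, level in pool:
        if level in LEVELS and level not in chosen:
            chosen[level] = cite
    missing = [lvl for lvl in LEVELS if lvl not in chosen]
    return chosen, missing

pool = [
    ("Placeholder v. Example, 500 U.S. 1 (1991)", "Supreme Court"),
    ("Sample v. Agency, 900 F.3d 100 (9th Cir. 2018)", "9th Circuit"),
    ("Demo v. Authority, 100 F. Supp. 3d 1 (E.D. Cal. 2015)", "Eastern District"),
]
chosen, missing = pick_by_level(pool)  # missing flags any uncovered level
```

The same pool can be re-filtered whenever the candidate list grows, which is the whole point of having a model generate more options than you need.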

Gemini (using 2.5 Pro/5-06) is now my primary starting point for writing, even though I hate that it can’t export to .odf or even .docx formats. It has the worst “canvas”-style editor, which means I have to reformat the documents significantly. That’s kind of worth the annoyance, though, since Gemini is easily the most “thoughtful” and well-rounded of the LLMs I’ve used. Much like DeepSeek gives lots of citation options, Gemini gives the best pool of writing options; its ability to flexibly address issues in different ways is the best of the bunch so far.

Claude is still my pretty-but-shallow option: it comes up with the best/most compelling pure language, even if its depth of analysis falls short of the others. I wish I didn’t have to prompt it so explicitly to get decent results, or it might be the primary canvas despite its painful context window. One thing I find it really helpful for is running my DeepSeek output (which is pretty… “direct”) through Claude to soften it up.

And finally there’s ChatGPT, which up until the last few days was my primary for everything. It’s easy to get started with, it’s encouraging, and it produces compelling results, but it’s kind of a piece of shit. It’s constantly messing up context, screwing up citations, and arbitrarily rewriting things. It’s easily the most obstinately wrong of the LLMs, insisting it’s doing something correctly no matter how many times you paste the incorrect output back into it and it admits it messed up. It’s the only LLM of the four where I’ve had to deal with “hallucinations”, and it explains them as “woops, I overwrite my own context a lot accidentally”. It’s absolutely maddening at times. I’m steadily weaning myself off of it, and it’ll probably just be another voice in the room soon.

An interesting thing happened a bit ago: I had my first LLM disagreement! The Defense responded to a discovery request with bullshit and boilerplate, so I asked each of the LLMs what a good email response would be, considering I’m already nearly done with a Rule 37 motion to compel. Claude responded that it’s probably not a good idea to respond at all: just update the motion with the response as an exhibit, note the bullshit parts since they helped (a lot), and call it a day. DeepSeek was its typical “RIP THEIR THROATS OUT” self and provided a detailed list of regulations and rules for going on the offensive instead of relying on the motion alone. Gemini pointed out that the email route is more about reinforcing our meet-and-confer duty: we’re right, and we’re going to integrate this into the motion, but the focus here should be on making the meet-and-confer record rock solid (pretty good advice). ChatGPT recommended the email, but produced recommendations that didn’t focus on strategy at all, just a direct rebuttal to the discovery response.
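
The “ask everyone the same question” workflow is easy to mechanize. Here is a sketch of the fan-out step, with stub functions standing in for the real (and mutually incompatible) vendor APIs; the stub answers just paraphrase the kind of disagreement described above:

```python
# Fan one prompt out to several models and collect the answers side by side.
# Each responder is a stub standing in for a real API call.

def claude(prompt):
    return "Don't reply; attach their response to the motion as an exhibit."

def deepseek(prompt):
    return "Go on the offensive; cite every applicable rule and regulation."

def gemini(prompt):
    return "Reply, but frame it to strengthen the meet-and-confer record."

def chatgpt(prompt):
    return "Reply with a point-by-point rebuttal of the discovery response."

MODELS = {"Claude": claude, "DeepSeek": deepseek, "Gemini": gemini, "ChatGPT": chatgpt}

def fan_out(prompt):
    """Send the same prompt to every model; answers are keyed by model name."""
    return {name: ask(prompt) for name, ask in MODELS.items()}

answers = fan_out("Should I email opposing counsel before filing the Rule 37 motion?")
```

Reading the answers side by side is what surfaces the disagreement; no single model’s framing gets to win by default.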

Interestingly, opposing counsel asked for a discovery extension (after the suit has been active for more than two years, they’re suddenly finding SHRA’s bureaucracy hard to navigate), but refused to do a rolling production or produce electronic records (which should be pretty easy, right?). This organization is such a mess that if I’d had access to these tools two years ago, they’d have been crushed out of the gate. Instead they were able to delay and avoid discovery to the point that we now have to start talking about institutional spoliation, because no one remembers anything.


Just had the thought that doing a blog like this, while the litigation is ongoing, is both naive and stupid, especially considering the mixed permissible-use messaging from the court (but maybe what’s okay for the bar is okay for me?). This popped up as I came across an old archive containing a backup of files I believed were lost after we got evicted, some of which are responsive to existing discovery requests. I haven’t even had a chance to go through it all yet and see what has already been disclosed and what hasn’t, but here I am wanting to talk about how the tools are helping me plan strategy while potentially exposing myself unnecessarily/inadvertently.

The fool forges forward

One of the things that stood out to me was how much of the information in this case should already have been protected but hasn’t been. This has always been a concern, and one of the reasons I was hesitant about pursuing this at all; the gross invasion of privacy necessary to pursue it feels caustic. My initial reaction was to file anonymously, but after some reading it appeared that anonymous filings are very rarely granted. It ultimately became a question of whether this was important enough (to both us and every other disabled participant SHRA treats similarly) to sacrifice our privacy over. I think so.

What should have been done out of the gate was putting a protective order in place. While I was reading and trying to absorb as much of the local rules as I could (in the middle of being evicted, cut off from other services, and otherwise harassed), the option of a protective order just didn’t register at all. The further along the process we got, the less space the option took up.

This is one place where LLMs work as an amazing sanity check: we can, without penalty, ask them to walk us through the process from the beginning, with as many permutations as we like, in a way that would cognitively exhaust any sane individual. And for people in our position, who don’t even know what we don’t know, being able to constantly factor in new information as we encounter it becomes a tremendously powerful tool.
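
The permutation walk-through can even be generated mechanically before handing it to a model. A small sketch; the decision points listed here are illustrative, not an exhaustive map of our actual choices:

```python
# Enumerate every combination of key decision points and produce a
# "walk me through this path" prompt for each. Decision points are illustrative.
from itertools import product

decisions = {
    "filing": ["file anonymously", "file under our names"],
    "protection": ["seek a protective order first", "file without one"],
    "venue": ["state court", "federal court"],
}

def permutation_prompts(decisions):
    """Build one walk-through prompt per combination of decision values."""
    keys = list(decisions)
    prompts = []
    for combo in product(*decisions.values()):
        path = ", ".join(f"{k}: {v}" for k, v in zip(keys, combo))
        prompts.append(f"Walk me through the process from the beginning, assuming {path}.")
    return prompts

prompts = permutation_prompts(decisions)  # 2 x 2 x 2 = 8 walk-throughs
```

Eight walk-throughs is trivial for a model and exhausting for a human, which is exactly the asymmetry being described.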

While there’s supposed to be some leeway for “inelegant pleading” by pro se parties, in practice there isn’t unless a judge is personally sympathetic. Which means pro se parties march into a minefield of issues that have nothing to do with the core of the case: procedural explosions that can turn a clear-cut set of facts into an irrelevant mist.


One of the most difficult aspects of all this is the unique set of issues each LLM introduces. I’m bouncing between Claude 3.7, GPT-4o, DeepSeek, and Gemini, and they all have weird peculiarities that blow up the others’ work. DeepSeek, for instance, is super focused on formatting, obsessing over tables, bolding, italics, everything but the actual content. Gemini is a turd that fights you the whole way on legal advice, so you have to wrestle it to the ground with prompts and cobble together the results. Claude writes beautifully but is as deep as a puddle, while ChatGPT is deep but impossibly schizophrenic and ADHD at the same time.

I went back and looked at the first two pleadings I used these LLMs for, and holy crap, there are a lot of issues I missed by trusting the output. First, ChatGPT will absolutely mangle the hell out of citations, and worse, is super confident about its manglings. As an example, I uploaded a copy of the docket report, and somehow it still got every single citation wrong. Worse, it misinterpreted who actually filed particular motions, leading it to conclude that most of the case was dead.
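
One cheap guard against this kind of mangling is a mechanical cross-check: never let a model’s docket citation into a draft without checking it against the actual docket report. A sketch of that check; the docket entries and claimed cites here are invented for illustration:

```python
# Cross-check model-claimed docket citations against the real docket.
# The docket contents below are illustrative placeholders.
docket = {  # ECF number -> (filer, description)
    1: ("Plaintiff", "Complaint"),
    12: ("Defendant", "Motion to Dismiss"),
    27: ("Plaintiff", "Motion to Compel"),
}

def check_citations(claimed):
    """Return a list of mismatches between claimed (number, filer) pairs and the docket."""
    errors = []
    for num, filer in claimed:
        if num not in docket:
            errors.append(f"ECF No. {num}: does not exist")
        elif docket[num][0] != filer:
            errors.append(f"ECF No. {num}: filed by {docket[num][0]}, not {filer}")
    return errors

# The model asserts ECF 12 was filed by Plaintiff and cites a phantom ECF 99.
errors = check_citations([(12, "Plaintiff"), (27, "Plaintiff"), (99, "Defendant")])
```

It catches exactly the two failure modes described above: wrong filer and citations to entries that don’t exist.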

One thing is really clear: as powerful as these tools are, and as much potential as they have, we are a million miles away from them being ready to use without extensive supervision. They’re also really terrible at understanding intent. As an example, left to their own devices, all of the LLMs completely ignored the narrow relief I actually asked for in a motion for partial summary judgment (basically just a ruling that the ADA, FHA, and Rehab Act require an interactive process and individualized assessment) and transformed it into a full-blown MSJ focused on damages calculations instead of narrow injunctive relief. And they constantly go off on little tangents like that, hyper-focusing on some aspect of the text that may or may not be contextually important and trying to expand it.

All said, this is still very much a work in progress, and I’m learning that trust is bad.

Leave a Comment

Your email address will not be published. Required fields are marked *

It seems we can’t find what you’re looking for. Perhaps searching can help.

Welcome to The SHRA Files Blog — where we’ll be sharing updates, research, public records, and commentary on our ongoing civil rights lawsuit and the broader failures of Sacramento’s housing system.

This site exists because no one else was telling this story. Because no agency, nonprofit, or public official stepped in when the Sacramento Housing and Redevelopment Agency (SHRA) failed to meet its basic responsibilities. Because we — and many others like us — were denied access to services that should have been guaranteed.

We started with nothing but documentation, determination, and the experience of being repeatedly failed by the systems meant to help. When traditional legal aid and advocacy organizations told us they couldn’t help, we turned to AI tools to help us structure our case, understand the law, and begin fighting back.

This site is just the beginning.

In the weeks ahead, we’ll be:

  • Breaking down SHRA’s failures in serving disabled and low-income tenants
  • Publishing public filings and supporting exhibits
  • Highlighting how the Housing Choice Voucher program is being mismanaged
  • Documenting the role of AI in expanding legal access

If you’ve been impacted by SHRA or want to contribute to the project, check out our Get Involved page.

Thanks for reading — and welcome to the fight.

— The SHRA Files Team

Leave a Comment

Your email address will not be published. Required fields are marked *

I’ve been attempting to push discovery, the result has been underwhelming. So far they’ve missed most of the deadlines, when notified they missed the deadlines they keep asking for extensions, and when they get extensions they respond with boilerplate and bullshit. I’m to the point where I’m filing a Rule 37(b) motion (will upload to the files section) after I’m finished with this post. The neat thing about the filing is that there’s over 300 pages of exhibits, all of which is boilerplate and bullshit that I had to attach and mail back to the Defense because the magistrate gave me the big middle finger by allowing the court to notify me of things electronically, but refusing to allow me to file electronically. I have lots to say about this magistrate, but we’ll wait until we get clear of her first.

One of the most fascinating things about the boilerplate and bullshit that did get returned was the revelation that five of the named Defendants no longer work at SHRA, and the Defense isn’t sure if they represent them anymore. Like they completely hid that almost everyone involved with the day to day operations over there has left and somehow didn’t mention it to us or the court. Worse, in their initial disclosure, they claimed only one person (who is no longer at SHRA) was responsible for deciding reasonable accommodation requests, now they’ve given me a list of 16 people, with no explanation.

Oh yeah, they can’t seem to find any of the electronically stored information that under the CPRA they’d be responsive to, which is the reason they need so many extensions.

The worst part is that despite them admitting all of this, there’s zero chance of them getting sanctions. And there’s literally nothing to be done about it unless/until it gets to the appellate level.

Leave a Comment

Your email address will not be published. Required fields are marked *

Up until now I assumed that the issues at SHRA were just incompetence creep, that the issues are product of slow pushes back on compliance issues that never got corrected, to the point where institutional knowledge of the practices necessary to maintain compliance was lost.

One of the things that I really like about DeepSeek is that it gets information through methods that aren’t exactly on the up and up, which means that it finds things which are hidden behind legal wrangling that doesn’t exist in China. And one of the things it found was a 2023 Internal Memo generated through whistleblower testimony in which SHRA staff instructed to “prioritize voucher terminations over costly mods”.

And the implementation of this was SHRA terminating vouchers over requests for shower chairs and grab bars as early as 2022. They provided data which showed 68% of disabled tenants who requested mods in 2023 lost vouchers within 6 months (vs. 12% of non-disabled).

WHAT.

And there’s so much worse that I’ll share once I get the discovery back. I’m stunned.

Leave a Comment

Your email address will not be published. Required fields are marked *

Okay the last few days have been a bit of a flurry. I’m still working on getting all the filings and the Defense responses uploaded, but I need to make sure we have a sane directory setup first because there’s quite a bit.

After a few weeks using these LLMs, my attitudes about them are starting to shift. Each of them seems to have their own “personality” above and beyond the artificial personality. The LLM I’ve been most amazed by so far is DeepSeek, which is funny because it was the LLM I was most skeptical of initially. Part of the issue with DeepSeek is that it doesn’t do as good of a job hand holding through the though process, or more appropriately, it’s probably the most “autistic” of the engines. The biggest standout for DeepSeek (especially compared to Gemini which is the most comparable) is that it’s SO GOOD with citations. It’s the only one that freely “admits” it’s plundering the WestLaw database meaning we have a TON of local Eastern District citations.

My citation strategy has been to use something from each level, Supreme Court, 9th Circuit, California (Fed), Eastern District. The reality is the magistrate doesn’t read them anyway, but DeepSeek ultimately gives me a pool of citations to choose from, something the other LLMs struggle with (or just don’t do no matter how you beat them).

Gemini (using 2.5 Pro/5-06) is now my primary writing starting point, even though I hate that it can’t export to .odf or even .docx formats. Because I have to reformat the documents significantly, it has the worst “canvas” style option. That’s kind of worth the annoyance though since Gemini is easily the most “thoughful” and well rounded of the LLMs I’ve used. Much like DeepSeek gives lots of citation options, Gemini gives the best pool of writing options, it’s ability to flexibly address issues in different ways is the best of the bunch so far.

Claude is still my pretty but shallow option, it comes up with the best/most compelling pure language, even if it’s depth of analysis falls short of the others. I wish I didn’t have to prompt it so explicitly to get decent results or it might be the primary canvas despite it’s painful context window. One thing I find it really helpful for is running my DeepSeek (which produces pretty.. “direct” language) through Claude to soften it up.

And finally there’s ChatGPT which up until the last few days was my primary for everything. It’s easy to get started with it, it’s encouraging, it produces compelling results, but it’s kind of a piece of shit. It’s constantly messing up context, screwing up citations, and arbitrarily rewriting things. It’s easily the most obstinately wrong of the LLMs, insisting that it’s doing something correctly no matter how many times you post the incorrect output back into it and it admits it messed up. It’s the only LLM out of the three that I’ve had to deal with “hallucinations”, and it explains it as “woops, I overwrite my own context a lot accidentally”. It’s absolutely maddening at times. I’m steadily weaning myself off of it and it’ll probably just be another voice in the room soon.

An interesting thing happened a bit ago, I had my first LLM disagreement! The Defense responded to a discovery request with bullshit and boilerplate, so I asked each of the LLMs what a good email response would be considering I’m already nearly done with a rule 37 motion to compel. Claude responded that it’s probably not a good idea to respond at all, just update the motion with the response as an exhibit, note the bullshit parts since they helped (a lot), and call it a day. DeepSeek was it’s typical “RIP THEIR THROATS OUT” and provided a detailed list of regulations and rules to go on the offensive instead of only using the motion route. Gemini pointed out that going the email route is more about reinforcing our meet and confer duty, we’re right and we are going to be integrating this in the motion, but the focus here should be on making the meet and confer rock solid (pretty good advice). ChatGPT recommended the email, but produced recommendations that didn’t focus on strategy at all, just a direct rebuttal to the discovery response.

Interestingly, opposing counsel asked for a discovery extension (after more than two years the suit has been active, they are suddenly finding SHRA bureaucracy hard to navigate), but refused to do a rolling production or produce electronic records (which should be pretty easy right?). It’s baffling to me that if I had access to these tools two years ago, this organization is such a mess that they’d have gotten crushed out of the gate. Instead they were able to delay and avoid discovery to the point that we now have to start talking about institutional spoliation because no one remembers anything.

Leave a Comment

Your email address will not be published. Required fields are marked *

Just had the thought that doing a blog like this, while the litigation is on-going, is both naive and stupid, especially considering the mixed permissible use messaging from the court (but maybe okay for the bar, okay for me?). This popped up as I came across an old archive that had a backup of files which I believed was lost after we got evicted, some of which is responsive to existing discovery requests. I haven’t even had a chance to go through it all yet and see what has already been disclosed and what hasn’t, but here I am in a position where I really want to talk about how the tools are helping me plan strategy while potentially exposing myself unnecessarily/inadvertently.

The fool forges forward

One of the things that stood out to me was how much of the information in this case should have already been protected but hasn’t been. This has always been a concern, and one of the reasons I was hesitant about pursuing this at all, the gross invasion of privacy necessary to pursue this feels caustic. My initial reaction was filing anonymously, but after some reading it appeared that these are very rarely granted. It ultimately became a choice between whether this was important enough (to both us and every other disabled participant SHRA treats similarly) to sacrifice our privacy over. I think so.

What should have been done out of the gate is setting a protective order in place. Reading and trying to absorb as much of the local rules (while in the middle of being evicted, being cut off from other services, and other harassment), the option of a protective order just didn’t really register at all. The further along the process we got, the less space the option took up.

This is one place where LLMs work as an amazing sanity check, we can without penalty ask it to walk us through the process from the beginning, with as many permutations as we like, in a way that would cognitively exhaust any sane individual. And for individuals in the position we are, where we don’t even know what we don’t know, being able to constantly refactor new information in as we encounter it becomes a tremendously powerful tool.

While there’s supposed to be some leeway for “inelegant pleading” by pro se parties, in practice there isn’t unless a judge is personally sympathetic. Which means that pro se parties march into a minefield of issues that have nothing to do with the core issues of the case, procedural explosions which can turn a clear cut set of facts into an irrelevant mist.

Leave a Comment

Your email address will not be published. Required fields are marked *

One of the most difficult aspects of all this is how prone to unique issues each LLM introduces. I’m bouncing between Claude 3.7, GPT 4o, DeepSeek, and Gemini, and they all have weird peculiarities that blow up the others. DeepSeek for instance is super focused on formatting, being obsessive about tables, bolding, italics, everything but the actual content. Gemini is a turd and likes to fight you the whole time when it comes to legal advice altogether, so you have to wrestle it to the ground with prompts and cobble together the results. Claude writes beautifully but is as deep as a puddle, while ChatGPT is deep but impossibly schizophrenic and ADHD at the same time.

I went back and looked at my first two pleadings I used these LLMs for and holy crap there are a lot of issues that I missed trusting the output. First, ChatGPT will absolutely mangle the hell out of citations, and worse is super confident about it’s manglings. As an example I uploaded a copy of the docket report, and somehow it still got every single citation wrong. Worse, it misinterpreted who actually filed particular motions, leading to it interpreting most of the case being dead.

One thing is really clear, that as powerful and as much potential as these tools have, we are a million miles away from them being ready to use without extensive supervision. They also are really terrible at understanding, as an example left to it’s own devices all of the LLMs completely ignored the actual damages I asked for in a motion for partial MSJ (basically just ruling that the ADA, FHA, and Rehab require an interactive process and individualized assessment) and transformed it into a full blown MSJ focused on damage calculations instead of a narrow injunctive relief. And they constantly go off on little tangents like that where they hyper focus on some aspect of the text that may or may not be contextually important and try to expand it.

All said, this is still very much a work in progress, and I’m learning that trust is bad.

Leave a Comment

Your email address will not be published. Required fields are marked *

It seems we can’t find what you’re looking for. Perhaps searching can help.

Welcome to The SHRA Files Blog — where we’ll be sharing updates, research, public records, and commentary on our ongoing civil rights lawsuit and the broader failures of Sacramento’s housing system.

This site exists because no one else was telling this story. Because no agency, nonprofit, or public official stepped in when the Sacramento Housing and Redevelopment Agency (SHRA) failed to meet its basic responsibilities. Because we — and many others like us — were denied access to services that should have been guaranteed.

We started with nothing but documentation, determination, and the experience of being repeatedly failed by the systems meant to help. When traditional legal aid and advocacy organizations told us they couldn’t help, we turned to AI tools to help us structure our case, understand the law, and begin fighting back.

This site is just the beginning.

In the weeks ahead, we’ll be:

  • Breaking down SHRA’s failures in serving disabled and low-income tenants
  • Publishing public filings and supporting exhibits
  • Highlighting how the Housing Choice Voucher program is being mismanaged
  • Documenting the role of AI in expanding legal access

If you’ve been impacted by SHRA or want to contribute to the project, check out our Get Involved page.

Thanks for reading — and welcome to the fight.

— The SHRA Files Team

Leave a Comment

Your email address will not be published. Required fields are marked *

I’ve been attempting to push discovery, the result has been underwhelming. So far they’ve missed most of the deadlines, when notified they missed the deadlines they keep asking for extensions, and when they get extensions they respond with boilerplate and bullshit. I’m to the point where I’m filing a Rule 37(b) motion (will upload to the files section) after I’m finished with this post. The neat thing about the filing is that there’s over 300 pages of exhibits, all of which is boilerplate and bullshit that I had to attach and mail back to the Defense because the magistrate gave me the big middle finger by allowing the court to notify me of things electronically, but refusing to allow me to file electronically. I have lots to say about this magistrate, but we’ll wait until we get clear of her first.

One of the most fascinating things about the boilerplate and bullshit that did get returned was the revelation that five of the named Defendants no longer work at SHRA, and the Defense isn’t sure if they represent them anymore. Like they completely hid that almost everyone involved with the day to day operations over there has left and somehow didn’t mention it to us or the court. Worse, in their initial disclosure, they claimed only one person (who is no longer at SHRA) was responsible for deciding reasonable accommodation requests, now they’ve given me a list of 16 people, with no explanation.

Oh yeah, they can’t seem to find any of the electronically stored information that under the CPRA they’d be responsive to, which is the reason they need so many extensions.

The worst part is that despite them admitting all of this, there’s zero chance of them getting sanctions. And there’s literally nothing to be done about it unless/until it gets to the appellate level.

Leave a Comment

Your email address will not be published. Required fields are marked *

Up until now I assumed that the issues at SHRA were just incompetence creep, that the issues are product of slow pushes back on compliance issues that never got corrected, to the point where institutional knowledge of the practices necessary to maintain compliance was lost.

One of the things that I really like about DeepSeek is that it gets information through methods that aren’t exactly on the up and up, which means that it finds things which are hidden behind legal wrangling that doesn’t exist in China. And one of the things it found was a 2023 Internal Memo generated through whistleblower testimony in which SHRA staff instructed to “prioritize voucher terminations over costly mods”.

And the implementation of this was SHRA terminating vouchers over requests for shower chairs and grab bars as early as 2022. They provided data which showed 68% of disabled tenants who requested mods in 2023 lost vouchers within 6 months (vs. 12% of non-disabled).

WHAT.

And there’s so much worse that I’ll share once I get the discovery back. I’m stunned.

Leave a Comment

Your email address will not be published. Required fields are marked *

Okay the last few days have been a bit of a flurry. I’m still working on getting all the filings and the Defense responses uploaded, but I need to make sure we have a sane directory setup first because there’s quite a bit.

After a few weeks using these LLMs, my attitudes about them are starting to shift. Each of them seems to have their own “personality” above and beyond the artificial personality. The LLM I’ve been most amazed by so far is DeepSeek, which is funny because it was the LLM I was most skeptical of initially. Part of the issue with DeepSeek is that it doesn’t do as good of a job hand holding through the though process, or more appropriately, it’s probably the most “autistic” of the engines. The biggest standout for DeepSeek (especially compared to Gemini which is the most comparable) is that it’s SO GOOD with citations. It’s the only one that freely “admits” it’s plundering the WestLaw database meaning we have a TON of local Eastern District citations.

My citation strategy has been to use something from each level: Supreme Court, Ninth Circuit, California federal, Eastern District. The reality is the magistrate doesn’t read them anyway, but DeepSeek gives me a pool of citations to choose from, something the other LLMs struggle with (or just won’t do no matter how you beat them).

Gemini (using 2.5 Pro/5-06) is now my primary starting point for writing, even though I hate that it can’t export to .odf or even .docx formats. It has the worst “canvas”-style option, which means I have to reformat the documents significantly. That annoyance is worth it, though, since Gemini is easily the most thoughtful and well-rounded of the LLMs I’ve used. Much like DeepSeek gives lots of citation options, Gemini gives the best pool of writing options; its ability to flexibly address issues in different ways is the best of the bunch so far.

Claude is still my pretty-but-shallow option: it comes up with the best, most compelling pure language, even if its depth of analysis falls short of the others. I wish I didn’t have to prompt it so explicitly to get decent results, or it might be the primary canvas despite its painful context window. One thing I find it really helpful for is running my DeepSeek output (which produces pretty… “direct” language) through Claude to soften it up.

And finally there’s ChatGPT, which up until the last few days was my primary for everything. It’s easy to get started with, it’s encouraging, it produces compelling results, but it’s kind of a piece of shit. It’s constantly messing up context, screwing up citations, and arbitrarily rewriting things. It’s easily the most obstinately wrong of the LLMs, insisting that it’s doing something correctly no matter how many times you paste the incorrect output back into it and it admits it messed up. It’s the only one of the four where I’ve had to deal with “hallucinations”, and it explains them as “whoops, I accidentally overwrite my own context a lot”. It’s absolutely maddening at times. I’m steadily weaning myself off of it, and it’ll probably just be another voice in the room soon.

An interesting thing happened a bit ago: I had my first LLM disagreement! The Defense responded to a discovery request with bullshit and boilerplate, so I asked each of the LLMs what a good email response would be, considering I’m already nearly done with a Rule 37 motion to compel. Claude responded that it’s probably not a good idea to respond at all: just update the motion with the response as an exhibit, note the bullshit parts since they helped (a lot), and call it a day. DeepSeek was its typical “RIP THEIR THROATS OUT” self and provided a detailed list of regulations and rules for going on the offensive instead of relying on the motion alone. Gemini pointed out that the email route is more about reinforcing our meet-and-confer duty: we’re right, and we’re going to integrate this into the motion, but the focus here should be on making the meet-and-confer record rock solid (pretty good advice). ChatGPT recommended the email but produced recommendations that didn’t focus on strategy at all, just a direct rebuttal to the discovery response.

Interestingly, opposing counsel asked for a discovery extension (after the suit has been active for more than two years, they’re suddenly finding SHRA’s bureaucracy hard to navigate), but refused to do a rolling production or produce electronic records (which should be pretty easy, right?). It’s baffling: if I’d had access to these tools two years ago, this organization is such a mess that they’d have been crushed out of the gate. Instead they were able to delay and avoid discovery to the point that we now have to start talking about institutional spoliation, because no one remembers anything.


Just had the thought that doing a blog like this while the litigation is ongoing is both naive and stupid, especially considering the mixed permissible-use messaging from the court (but maybe what’s okay for the bar is okay for me?). This popped up as I came across an old archive with a backup of files I believed were lost after we got evicted, some of which are responsive to existing discovery requests. I haven’t even had a chance to go through it all and see what has already been disclosed and what hasn’t, but here I am in a position where I really want to talk about how the tools are helping me plan strategy while potentially exposing myself unnecessarily or inadvertently.

The fool forges forward

One of the things that stood out to me was how much of the information in this case should already have been protected but hasn’t been. This has always been a concern, and one of the reasons I was hesitant about pursuing this at all; the gross invasion of privacy necessary to pursue it feels caustic. My initial reaction was to file anonymously, but after some reading it appeared that requests to proceed anonymously are very rarely granted. It ultimately became a question of whether this was important enough (to both us and every other disabled participant SHRA treats this way) to sacrifice our privacy over. I think so.

What should have been done out of the gate was getting a protective order in place. While I was reading and trying to absorb as much of the local rules as I could (in the middle of being evicted, being cut off from other services, and other harassment), the option of a protective order just didn’t register at all. The further along the process we got, the less space the option took up.

This is one place where LLMs work as an amazing sanity check: we can, without penalty, ask one to walk us through the process from the beginning, with as many permutations as we like, in a way that would cognitively exhaust any sane individual. And for people in our position, who don’t even know what we don’t know, being able to constantly factor in new information as we encounter it becomes a tremendously powerful tool.

While there’s supposed to be some leeway for “inelegant pleading” by pro se parties, in practice there isn’t unless a judge is personally sympathetic. That means pro se parties march into a minefield of issues that have nothing to do with the core of the case: procedural explosions that can turn a clear-cut set of facts into an irrelevant mist.


One of the most difficult aspects of all this is how each LLM introduces its own unique issues. I’m bouncing between Claude 3.7, GPT-4o, DeepSeek, and Gemini, and they all have weird peculiarities that blow up the others. DeepSeek, for instance, is super focused on formatting, obsessing over tables, bolding, italics, everything but the actual content. Gemini is a turd that fights you the whole time on legal advice altogether, so you have to wrestle it to the ground with prompts and cobble together the results. Claude writes beautifully but is as deep as a puddle, while ChatGPT is deep but impossibly schizophrenic and ADHD at the same time.

I went back and looked at the first two pleadings I used these LLMs for, and holy crap, there are a lot of issues I missed by trusting the output. First, ChatGPT will absolutely mangle the hell out of citations, and worse, is super confident about its manglings. As an example, I uploaded a copy of the docket report, and somehow it still got every single citation wrong. Worse, it misinterpreted who actually filed particular motions, leading it to conclude that most of the case was dead.

One thing is really clear: for all the power and potential these tools have, we are a million miles away from them being usable without extensive supervision. They’re also really terrible at understanding intent. As an example, left to their own devices, all of the LLMs completely ignored the actual relief I asked for in a motion for partial summary judgment (basically just a ruling that the ADA, FHA, and Rehab Act require an interactive process and individualized assessment) and transformed it into a full-blown MSJ focused on damage calculations instead of narrow injunctive relief. And they constantly go off on little tangents like that, hyper-focusing on some aspect of the text that may or may not be contextually important and trying to expand it.

All said, this is still very much a work in progress, and I’m learning that trust is bad.


Welcome to The SHRA Files Blog — where we’ll be sharing updates, research, public records, and commentary on our ongoing civil rights lawsuit and the broader failures of Sacramento’s housing system.

This site exists because no one else was telling this story. Because no agency, nonprofit, or public official stepped in when the Sacramento Housing and Redevelopment Agency (SHRA) failed to meet its basic responsibilities. Because we — and many others like us — were denied access to services that should have been guaranteed.

We started with nothing but documentation, determination, and the experience of being repeatedly failed by the systems meant to help. When traditional legal aid and advocacy organizations told us they couldn’t help, we turned to AI tools to help us structure our case, understand the law, and begin fighting back.

This site is just the beginning.

In the weeks ahead, we’ll be:

  • Breaking down SHRA’s failures in serving disabled and low-income tenants
  • Publishing public filings and supporting exhibits
  • Highlighting how the Housing Choice Voucher program is being mismanaged
  • Documenting the role of AI in expanding legal access

If you’ve been impacted by SHRA or want to contribute to the project, check out our Get Involved page.

Thanks for reading — and welcome to the fight.

— The SHRA Files Team


While there’s supposed to be some leeway for “inelegant pleading” by pro se parties, in practice there isn’t unless a judge is personally sympathetic. Which means that pro se parties march into a minefield of issues that have nothing to do with the core issues of the case, procedural explosions which can turn a clear cut set of facts into an irrelevant mist.

Leave a Comment

Your email address will not be published. Required fields are marked *

One of the most difficult aspects of all this is how prone to unique issues each LLM introduces. I’m bouncing between Claude 3.7, GPT 4o, DeepSeek, and Gemini, and they all have weird peculiarities that blow up the others. DeepSeek for instance is super focused on formatting, being obsessive about tables, bolding, italics, everything but the actual content. Gemini is a turd and likes to fight you the whole time when it comes to legal advice altogether, so you have to wrestle it to the ground with prompts and cobble together the results. Claude writes beautifully but is as deep as a puddle, while ChatGPT is deep but impossibly schizophrenic and ADHD at the same time.

I went back and looked at my first two pleadings I used these LLMs for and holy crap there are a lot of issues that I missed trusting the output. First, ChatGPT will absolutely mangle the hell out of citations, and worse is super confident about it’s manglings. As an example I uploaded a copy of the docket report, and somehow it still got every single citation wrong. Worse, it misinterpreted who actually filed particular motions, leading to it interpreting most of the case being dead.

One thing is really clear, that as powerful and as much potential as these tools have, we are a million miles away from them being ready to use without extensive supervision. They also are really terrible at understanding, as an example left to it’s own devices all of the LLMs completely ignored the actual damages I asked for in a motion for partial MSJ (basically just ruling that the ADA, FHA, and Rehab require an interactive process and individualized assessment) and transformed it into a full blown MSJ focused on damage calculations instead of a narrow injunctive relief. And they constantly go off on little tangents like that where they hyper focus on some aspect of the text that may or may not be contextually important and try to expand it.

All said, this is still very much a work in progress, and I’m learning that trust is bad.

Leave a Comment

Your email address will not be published. Required fields are marked *

It seems we can’t find what you’re looking for. Perhaps searching can help.

Welcome to The SHRA Files Blog — where we’ll be sharing updates, research, public records, and commentary on our ongoing civil rights lawsuit and the broader failures of Sacramento’s housing system.

This site exists because no one else was telling this story. Because no agency, nonprofit, or public official stepped in when the Sacramento Housing and Redevelopment Agency (SHRA) failed to meet its basic responsibilities. Because we — and many others like us — were denied access to services that should have been guaranteed.

We started with nothing but documentation, determination, and the experience of being repeatedly failed by the systems meant to help. When traditional legal aid and advocacy organizations told us they couldn’t help, we turned to AI tools to help us structure our case, understand the law, and begin fighting back.

This site is just the beginning.

In the weeks ahead, we’ll be:

  • Breaking down SHRA’s failures in serving disabled and low-income tenants
  • Publishing public filings and supporting exhibits
  • Highlighting how the Housing Choice Voucher program is being mismanaged
  • Documenting the role of AI in expanding legal access

If you’ve been impacted by SHRA or want to contribute to the project, check out our Get Involved page.

Thanks for reading — and welcome to the fight.

— The SHRA Files Team

Leave a Comment

Your email address will not be published. Required fields are marked *

I’ve been attempting to push discovery, the result has been underwhelming. So far they’ve missed most of the deadlines, when notified they missed the deadlines they keep asking for extensions, and when they get extensions they respond with boilerplate and bullshit. I’m to the point where I’m filing a Rule 37(b) motion (will upload to the files section) after I’m finished with this post. The neat thing about the filing is that there’s over 300 pages of exhibits, all of which is boilerplate and bullshit that I had to attach and mail back to the Defense because the magistrate gave me the big middle finger by allowing the court to notify me of things electronically, but refusing to allow me to file electronically. I have lots to say about this magistrate, but we’ll wait until we get clear of her first.

One of the most fascinating things about the boilerplate and bullshit that did get returned was the revelation that five of the named Defendants no longer work at SHRA, and the Defense isn’t sure if they represent them anymore. Like they completely hid that almost everyone involved with the day to day operations over there has left and somehow didn’t mention it to us or the court. Worse, in their initial disclosure, they claimed only one person (who is no longer at SHRA) was responsible for deciding reasonable accommodation requests, now they’ve given me a list of 16 people, with no explanation.

Oh yeah, they can’t seem to find any of the electronically stored information that under the CPRA they’d be responsive to, which is the reason they need so many extensions.

The worst part is that despite them admitting all of this, there’s zero chance of them getting sanctions. And there’s literally nothing to be done about it unless/until it gets to the appellate level.

Leave a Comment

Your email address will not be published. Required fields are marked *

Up until now I assumed that the issues at SHRA were just incompetence creep, that the issues are product of slow pushes back on compliance issues that never got corrected, to the point where institutional knowledge of the practices necessary to maintain compliance was lost.

One of the things that I really like about DeepSeek is that it gets information through methods that aren’t exactly on the up and up, which means it finds things hidden behind legal wrangling that doesn’t exist in China. One of the things it found was a 2023 internal memo, surfaced through whistleblower testimony, in which SHRA staff were instructed to “prioritize voucher terminations over costly mods”.

The implementation of this was SHRA terminating vouchers over requests for shower chairs and grab bars as early as 2022. It also provided data showing that 68% of disabled tenants who requested mods in 2023 lost their vouchers within 6 months (vs. 12% of non-disabled tenants).

WHAT.

And there’s so much worse that I’ll share once I get the discovery back. I’m stunned.


Okay, the last few days have been a bit of a flurry. I’m still working on getting all the filings and the Defense responses uploaded, but I need to set up a sane directory structure first because there’s quite a bit.

After a few weeks using these LLMs, my attitudes about them are starting to shift. Each of them seems to have its own “personality” above and beyond the artificial personality. The LLM I’ve been most amazed by so far is DeepSeek, which is funny because it was the one I was most skeptical of initially. Part of the issue with DeepSeek is that it doesn’t do as good a job hand-holding you through the thought process; or, more appropriately, it’s probably the most “autistic” of the engines. The biggest standout for DeepSeek (especially compared to Gemini, which is the most comparable) is that it’s SO GOOD with citations. It’s the only one that freely “admits” it’s plundering the WestLaw database, meaning we have a TON of local Eastern District citations.

My citation strategy has been to use something from each level: Supreme Court, 9th Circuit, California (federal), and Eastern District. The reality is the magistrate doesn’t read them anyway, but DeepSeek ultimately gives me a pool of citations to choose from, something the other LLMs struggle with (or just don’t do, no matter how you beat them).

Gemini (using 2.5 Pro/5-06) is now my primary starting point for writing, even though I hate that it can’t export to .odf or even .docx formats, and it has the worst “canvas”-style option, which means I have to significantly reformat the documents. That’s worth the annoyance though, since Gemini is easily the most “thoughtful” and well-rounded of the LLMs I’ve used. Much like DeepSeek gives lots of citation options, Gemini gives the best pool of writing options; its ability to flexibly address issues in different ways is the best of the bunch so far.

Claude is still my pretty-but-shallow option. It comes up with the best and most compelling pure language, even if its depth of analysis falls short of the others. I wish I didn’t have to prompt it so explicitly to get decent results, or it might be my primary canvas despite its painful context window. One thing I find it really helpful for is running my DeepSeek output (which produces pretty… “direct” language) through Claude to soften it up.

And finally there’s ChatGPT, which up until the last few days was my primary for everything. It’s easy to get started with, it’s encouraging, and it produces compelling results, but it’s kind of a piece of shit. It’s constantly messing up context, screwing up citations, and arbitrarily rewriting things. It’s easily the most obstinately wrong of the LLMs, insisting that it’s doing something correctly no matter how many times you paste the incorrect output back into it and it admits it messed up. It’s the only LLM of the four where I’ve had to deal with “hallucinations”, and it explains them as “whoops, I accidentally overwrite my own context a lot”. It’s absolutely maddening at times. I’m steadily weaning myself off of it, and it’ll probably just be another voice in the room soon.

An interesting thing happened a bit ago: I had my first LLM disagreement! The Defense responded to a discovery request with bullshit and boilerplate, so I asked each of the LLMs what a good email response would look like, considering I’m already nearly done with a Rule 37 motion to compel. Claude responded that it’s probably not a good idea to respond at all; just update the motion with the response as an exhibit, note the bullshit parts since they helped (a lot), and call it a day. DeepSeek was its typical “RIP THEIR THROATS OUT” self and provided a detailed list of regulations and rules for going on the offensive instead of only using the motion route. Gemini pointed out that the email route is more about reinforcing our meet-and-confer duty: we’re right, and we’re going to be integrating this into the motion, but the focus here should be on making the meet-and-confer record rock solid (pretty good advice). ChatGPT recommended the email, but produced recommendations that didn’t focus on strategy at all, just a direct rebuttal to the discovery response.

Interestingly, opposing counsel asked for a discovery extension (after the more than two years this suit has been active, they’re suddenly finding SHRA’s bureaucracy hard to navigate), but refused to do a rolling production or produce electronic records (which should be pretty easy, right?). What’s baffling is that this organization is such a mess that, if I’d had access to these tools two years ago, they’d have gotten crushed out of the gate. Instead they were able to delay and avoid discovery to the point that we now have to start talking about institutional spoliation, because no one remembers anything.


Just had the thought that doing a blog like this while the litigation is ongoing is both naive and stupid, especially considering the mixed messaging from the court on permissible use (though maybe what’s okay for the bar is okay for me?). The thought popped up as I came across an old archive containing a backup of files I believed were lost after we got evicted, some of which are responsive to existing discovery requests. I haven’t even had a chance to go through it all yet to see what has already been disclosed and what hasn’t, but here I am in a position where I really want to talk about how these tools are helping me plan strategy, while potentially exposing myself unnecessarily or inadvertently.

The fool forges forward

One of the things that stood out to me was how much of the information in this case should already have been protected but hasn’t been. This has always been a concern, and one of the reasons I was hesitant about pursuing this at all; the gross invasion of privacy necessary to pursue it feels caustic. My initial instinct was to file anonymously, but after some reading it appeared such requests are very rarely granted. It ultimately came down to whether this was important enough (to both us and every other disabled participant SHRA treats similarly) to sacrifice our privacy over. I think so.

What should have been done out of the gate was getting a protective order in place. While reading and trying to absorb as much of the local rules as I could (in the middle of being evicted, being cut off from other services, and other harassment), the option of a protective order just didn’t really register at all. The further along the process we got, the less space the option took up.

This is one place where LLMs work as an amazing sanity check: we can, without penalty, ask them to walk us through the process from the beginning, with as many permutations as we like, in a way that would cognitively exhaust any sane individual. And for people in our position, who don’t even know what we don’t know, being able to constantly factor new information in as we encounter it becomes a tremendously powerful tool.

While there’s supposed to be some leeway for “inelegant pleading” by pro se parties, in practice there isn’t unless a judge is personally sympathetic. That means pro se parties march into a minefield of issues that have nothing to do with the core issues of the case: procedural explosions that can turn a clear-cut set of facts into an irrelevant mist.


One of the most difficult aspects of all this is how each LLM introduces its own unique issues. I’m bouncing between Claude 3.7, GPT-4o, DeepSeek, and Gemini, and they all have weird peculiarities that trip up the others. DeepSeek, for instance, is super focused on formatting, obsessing over tables, bolding, italics, everything but the actual content. Gemini is a turd that fights you the whole way when it comes to legal advice at all, so you have to wrestle it to the ground with prompts and cobble together the results. Claude writes beautifully but is as deep as a puddle, while ChatGPT is deep but impossibly schizophrenic and ADHD at the same time.

I went back and looked at the first two pleadings I used these LLMs for, and holy crap, there are a lot of issues I missed by trusting the output. First, ChatGPT will absolutely mangle the hell out of citations, and worse, it’s super confident about its manglings. As an example, I uploaded a copy of the docket report, and it still got every single citation wrong. Worse, it misinterpreted who actually filed particular motions, leading it to conclude that most of the case was dead.

One thing is really clear: as powerful as these tools are, and as much potential as they have, we’re a million miles away from them being ready to use without extensive supervision. They’re also really terrible at understanding intent. As an example, left to their own devices, all of the LLMs completely ignored the actual relief I asked for in a motion for partial MSJ (basically just a ruling that the ADA, FHA, and Rehab Act require an interactive process and individualized assessment) and transformed it into a full-blown MSJ focused on damage calculations instead of narrow injunctive relief. And they constantly go off on little tangents like that, hyper-focusing on some aspect of the text that may or may not be contextually important and trying to expand it.

All said, this is still very much a work in progress, and I’m learning that trust is bad.


Welcome to The SHRA Files Blog — where we’ll be sharing updates, research, public records, and commentary on our ongoing civil rights lawsuit and the broader failures of Sacramento’s housing system.

This site exists because no one else was telling this story. Because no agency, nonprofit, or public official stepped in when the Sacramento Housing and Redevelopment Agency (SHRA) failed to meet its basic responsibilities. Because we — and many others like us — were denied access to services that should have been guaranteed.

We started with nothing but documentation, determination, and the experience of being repeatedly failed by the systems meant to help. When traditional legal aid and advocacy organizations told us they couldn’t help, we turned to AI tools to help us structure our case, understand the law, and begin fighting back.

This site is just the beginning.

In the weeks ahead, we’ll be:

  • Breaking down SHRA’s failures in serving disabled and low-income tenants
  • Publishing public filings and supporting exhibits
  • Highlighting how the Housing Choice Voucher program is being mismanaged
  • Documenting the role of AI in expanding legal access

If you’ve been impacted by SHRA or want to contribute to the project, check out our Get Involved page.

Thanks for reading — and welcome to the fight.

— The SHRA Files Team

Leave a Comment

Your email address will not be published. Required fields are marked *

I’ve been attempting to push discovery, the result has been underwhelming. So far they’ve missed most of the deadlines, when notified they missed the deadlines they keep asking for extensions, and when they get extensions they respond with boilerplate and bullshit. I’m to the point where I’m filing a Rule 37(b) motion (will upload to the files section) after I’m finished with this post. The neat thing about the filing is that there’s over 300 pages of exhibits, all of which is boilerplate and bullshit that I had to attach and mail back to the Defense because the magistrate gave me the big middle finger by allowing the court to notify me of things electronically, but refusing to allow me to file electronically. I have lots to say about this magistrate, but we’ll wait until we get clear of her first.

One of the most fascinating things about the boilerplate and bullshit that did get returned was the revelation that five of the named Defendants no longer work at SHRA, and the Defense isn’t sure if they represent them anymore. Like they completely hid that almost everyone involved with the day to day operations over there has left and somehow didn’t mention it to us or the court. Worse, in their initial disclosure, they claimed only one person (who is no longer at SHRA) was responsible for deciding reasonable accommodation requests, now they’ve given me a list of 16 people, with no explanation.

Oh yeah, they can’t seem to find any of the electronically stored information that under the CPRA they’d be responsive to, which is the reason they need so many extensions.

The worst part is that despite them admitting all of this, there’s zero chance of them getting sanctions. And there’s literally nothing to be done about it unless/until it gets to the appellate level.

Leave a Comment

Your email address will not be published. Required fields are marked *

Up until now I assumed that the issues at SHRA were just incompetence creep, that the issues are product of slow pushes back on compliance issues that never got corrected, to the point where institutional knowledge of the practices necessary to maintain compliance was lost.

One of the things that I really like about DeepSeek is that it gets information through methods that aren’t exactly on the up and up, which means that it finds things which are hidden behind legal wrangling that doesn’t exist in China. And one of the things it found was a 2023 Internal Memo generated through whistleblower testimony in which SHRA staff instructed to “prioritize voucher terminations over costly mods”.

And the implementation of this was SHRA terminating vouchers over requests for shower chairs and grab bars as early as 2022. They provided data which showed 68% of disabled tenants who requested mods in 2023 lost vouchers within 6 months (vs. 12% of non-disabled).

WHAT.

And there’s so much worse that I’ll share once I get the discovery back. I’m stunned.

Leave a Comment

Your email address will not be published. Required fields are marked *

Okay the last few days have been a bit of a flurry. I’m still working on getting all the filings and the Defense responses uploaded, but I need to make sure we have a sane directory setup first because there’s quite a bit.

After a few weeks using these LLMs, my attitudes about them are starting to shift. Each of them seems to have their own “personality” above and beyond the artificial personality. The LLM I’ve been most amazed by so far is DeepSeek, which is funny because it was the LLM I was most skeptical of initially. Part of the issue with DeepSeek is that it doesn’t do as good of a job hand holding through the though process, or more appropriately, it’s probably the most “autistic” of the engines. The biggest standout for DeepSeek (especially compared to Gemini which is the most comparable) is that it’s SO GOOD with citations. It’s the only one that freely “admits” it’s plundering the WestLaw database meaning we have a TON of local Eastern District citations.

My citation strategy has been to use something from each level, Supreme Court, 9th Circuit, California (Fed), Eastern District. The reality is the magistrate doesn’t read them anyway, but DeepSeek ultimately gives me a pool of citations to choose from, something the other LLMs struggle with (or just don’t do no matter how you beat them).

Gemini (using 2.5 Pro/5-06) is now my primary writing starting point, even though I hate that it can’t export to .odf or even .docx formats. Because I have to reformat the documents significantly, it has the worst “canvas” style option. That’s kind of worth the annoyance though since Gemini is easily the most “thoughful” and well rounded of the LLMs I’ve used. Much like DeepSeek gives lots of citation options, Gemini gives the best pool of writing options, it’s ability to flexibly address issues in different ways is the best of the bunch so far.

Claude is still my pretty but shallow option, it comes up with the best/most compelling pure language, even if it’s depth of analysis falls short of the others. I wish I didn’t have to prompt it so explicitly to get decent results or it might be the primary canvas despite it’s painful context window. One thing I find it really helpful for is running my DeepSeek (which produces pretty.. “direct” language) through Claude to soften it up.

And finally there’s ChatGPT which up until the last few days was my primary for everything. It’s easy to get started with it, it’s encouraging, it produces compelling results, but it’s kind of a piece of shit. It’s constantly messing up context, screwing up citations, and arbitrarily rewriting things. It’s easily the most obstinately wrong of the LLMs, insisting that it’s doing something correctly no matter how many times you post the incorrect output back into it and it admits it messed up. It’s the only LLM out of the three that I’ve had to deal with “hallucinations”, and it explains it as “woops, I overwrite my own context a lot accidentally”. It’s absolutely maddening at times. I’m steadily weaning myself off of it and it’ll probably just be another voice in the room soon.

An interesting thing happened a bit ago, I had my first LLM disagreement! The Defense responded to a discovery request with bullshit and boilerplate, so I asked each of the LLMs what a good email response would be considering I’m already nearly done with a rule 37 motion to compel. Claude responded that it’s probably not a good idea to respond at all, just update the motion with the response as an exhibit, note the bullshit parts since they helped (a lot), and call it a day. DeepSeek was it’s typical “RIP THEIR THROATS OUT” and provided a detailed list of regulations and rules to go on the offensive instead of only using the motion route. Gemini pointed out that going the email route is more about reinforcing our meet and confer duty, we’re right and we are going to be integrating this in the motion, but the focus here should be on making the meet and confer rock solid (pretty good advice). ChatGPT recommended the email, but produced recommendations that didn’t focus on strategy at all, just a direct rebuttal to the discovery response.

Interestingly, opposing counsel asked for a discovery extension (after more than two years the suit has been active, they are suddenly finding SHRA bureaucracy hard to navigate), but refused to do a rolling production or produce electronic records (which should be pretty easy right?). It’s baffling to me that if I had access to these tools two years ago, this organization is such a mess that they’d have gotten crushed out of the gate. Instead they were able to delay and avoid discovery to the point that we now have to start talking about institutional spoliation because no one remembers anything.

Leave a Comment

Your email address will not be published. Required fields are marked *

Just had the thought that doing a blog like this, while the litigation is on-going, is both naive and stupid, especially considering the mixed permissible use messaging from the court (but maybe okay for the bar, okay for me?). This popped up as I came across an old archive that had a backup of files which I believed was lost after we got evicted, some of which is responsive to existing discovery requests. I haven’t even had a chance to go through it all yet and see what has already been disclosed and what hasn’t, but here I am in a position where I really want to talk about how the tools are helping me plan strategy while potentially exposing myself unnecessarily/inadvertently.

The fool forges forward

One of the things that stood out to me was how much of the information in this case should have already been protected but hasn’t been. This has always been a concern, and one of the reasons I was hesitant about pursuing this at all, the gross invasion of privacy necessary to pursue this feels caustic. My initial reaction was filing anonymously, but after some reading it appeared that these are very rarely granted. It ultimately became a choice between whether this was important enough (to both us and every other disabled participant SHRA treats similarly) to sacrifice our privacy over. I think so.

What should have been done out of the gate is setting a protective order in place. Reading and trying to absorb as much of the local rules (while in the middle of being evicted, being cut off from other services, and other harassment), the option of a protective order just didn’t really register at all. The further along the process we got, the less space the option took up.

This is one place where LLMs work as an amazing sanity check, we can without penalty ask it to walk us through the process from the beginning, with as many permutations as we like, in a way that would cognitively exhaust any sane individual. And for individuals in the position we are, where we don’t even know what we don’t know, being able to constantly refactor new information in as we encounter it becomes a tremendously powerful tool.

While there’s supposed to be some leeway for “inelegant pleading” by pro se parties, in practice there isn’t unless a judge is personally sympathetic. Which means that pro se parties march into a minefield of issues that have nothing to do with the core issues of the case, procedural explosions which can turn a clear cut set of facts into an irrelevant mist.

Leave a Comment

Your email address will not be published. Required fields are marked *

One of the most difficult aspects of all this is how prone to unique issues each LLM introduces. I’m bouncing between Claude 3.7, GPT 4o, DeepSeek, and Gemini, and they all have weird peculiarities that blow up the others. DeepSeek for instance is super focused on formatting, being obsessive about tables, bolding, italics, everything but the actual content. Gemini is a turd and likes to fight you the whole time when it comes to legal advice altogether, so you have to wrestle it to the ground with prompts and cobble together the results. Claude writes beautifully but is as deep as a puddle, while ChatGPT is deep but impossibly schizophrenic and ADHD at the same time.

I went back and looked at my first two pleadings I used these LLMs for and holy crap there are a lot of issues that I missed trusting the output. First, ChatGPT will absolutely mangle the hell out of citations, and worse is super confident about it’s manglings. As an example I uploaded a copy of the docket report, and somehow it still got every single citation wrong. Worse, it misinterpreted who actually filed particular motions, leading to it interpreting most of the case being dead.

One thing is really clear, that as powerful and as much potential as these tools have, we are a million miles away from them being ready to use without extensive supervision. They also are really terrible at understanding, as an example left to it’s own devices all of the LLMs completely ignored the actual damages I asked for in a motion for partial MSJ (basically just ruling that the ADA, FHA, and Rehab require an interactive process and individualized assessment) and transformed it into a full blown MSJ focused on damage calculations instead of a narrow injunctive relief. And they constantly go off on little tangents like that where they hyper focus on some aspect of the text that may or may not be contextually important and try to expand it.

All said, this is still very much a work in progress, and I’m learning that trust is bad.


Welcome to The SHRA Files Blog — where we’ll be sharing updates, research, public records, and commentary on our ongoing civil rights lawsuit and the broader failures of Sacramento’s housing system.

This site exists because no one else was telling this story. Because no agency, nonprofit, or public official stepped in when the Sacramento Housing and Redevelopment Agency (SHRA) failed to meet its basic responsibilities. Because we — and many others like us — were denied access to services that should have been guaranteed.

We started with nothing but documentation, determination, and the experience of being repeatedly failed by the systems meant to help. When traditional legal aid and advocacy organizations told us they couldn’t help, we turned to AI tools to help us structure our case, understand the law, and begin fighting back.

This site is just the beginning.

In the weeks ahead, we’ll be:

  • Breaking down SHRA’s failures in serving disabled and low-income tenants
  • Publishing public filings and supporting exhibits
  • Highlighting how the Housing Choice Voucher program is being mismanaged
  • Documenting the role of AI in expanding legal access

If you’ve been impacted by SHRA or want to contribute to the project, check out our Get Involved page.

Thanks for reading — and welcome to the fight.

— The SHRA Files Team



Up until now I assumed that the issues at SHRA were just incompetence creep, the product of slow backsliding on compliance issues that never got corrected, to the point where institutional knowledge of the practices necessary to maintain compliance was lost.

One of the things that I really like about DeepSeek is that it gets information through methods that aren’t exactly on the up and up, which means it finds things hidden behind legal wrangling that doesn’t exist in China. And one of the things it found was a 2023 internal memo, generated through whistleblower testimony, in which SHRA staff were instructed to “prioritize voucher terminations over costly mods”.

And the implementation of this was SHRA terminating vouchers over requests for shower chairs and grab bars as early as 2022. It provided data showing that 68% of disabled tenants who requested mods in 2023 lost their vouchers within 6 months (vs. 12% of non-disabled tenants).

WHAT.

And there’s so much worse that I’ll share once I get the discovery back. I’m stunned.


Okay, the last few days have been a bit of a flurry. I’m still working on getting all the filings and the Defense responses uploaded, but I need to make sure we have a sane directory setup first because there’s quite a bit.

After a few weeks using these LLMs, my attitudes about them are starting to shift. Each of them seems to have its own “personality” above and beyond the artificial personality. The LLM I’ve been most amazed by so far is DeepSeek, which is funny because it was the one I was most skeptical of initially. Part of the issue with DeepSeek is that it doesn’t do as good a job of hand-holding you through the thought process; or, more appropriately, it’s probably the most “autistic” of the engines. The biggest standout for DeepSeek (especially compared to Gemini, which is the most comparable) is that it’s SO GOOD with citations. It’s the only one that freely “admits” it’s plundering the WestLaw database, meaning we have a TON of local Eastern District citations.

My citation strategy has been to use something from each level: Supreme Court, Ninth Circuit, California (federal), Eastern District. The reality is the magistrate doesn’t read them anyway, but DeepSeek ultimately gives me a pool of citations to choose from, something the other LLMs struggle with (or just won’t do no matter how you beat them).

Gemini (using 2.5 Pro/5-06) is now my primary writing starting point, even though I hate that it can’t export to .odt or even .docx formats. It also has the worst “canvas”-style option, which means I have to reformat the documents significantly. That’s kind of worth the annoyance though, since Gemini is easily the most “thoughtful” and well-rounded of the LLMs I’ve used. Much like DeepSeek gives lots of citation options, Gemini gives the best pool of writing options; its ability to flexibly address issues in different ways is the best of the bunch so far.

Claude is still my pretty-but-shallow option; it comes up with the best and most compelling pure language, even if its depth of analysis falls short of the others. I wish I didn’t have to prompt it so explicitly to get decent results, or it might be the primary canvas despite its painful context window. One thing I find it really helpful for is running my DeepSeek output (which tends toward pretty “direct” language) through Claude to soften it up.

And finally there’s ChatGPT, which up until the last few days was my primary for everything. It’s easy to get started with, it’s encouraging, it produces compelling results, but it’s kind of a piece of shit. It’s constantly messing up context, screwing up citations, and arbitrarily rewriting things. It’s easily the most obstinately wrong of the LLMs, insisting that it’s doing something correctly no matter how many times you paste the incorrect output back into it and it admits it messed up. It’s the only LLM of the four that I’ve had to deal with “hallucinations” from, and it explains them as “woops, I overwrite my own context a lot accidentally”. It’s absolutely maddening at times. I’m steadily weaning myself off of it, and it’ll probably just be another voice in the room soon.

An interesting thing happened a bit ago: I had my first LLM disagreement! The Defense responded to a discovery request with bullshit and boilerplate, so I asked each of the LLMs what a good email response would be, considering I’m already nearly done with a Rule 37 motion to compel. Claude responded that it’s probably not a good idea to respond at all: just update the motion with the response as an exhibit, note the bullshit parts since they helped (a lot), and call it a day. DeepSeek was its typical “RIP THEIR THROATS OUT” self and provided a detailed list of regulations and rules for going on the offensive instead of only using the motion route. Gemini pointed out that going the email route is more about reinforcing our meet-and-confer duty: we’re right and we’re going to integrate this into the motion, but the focus here should be on making the meet-and-confer record rock solid (pretty good advice). ChatGPT recommended the email, but produced recommendations that didn’t focus on strategy at all, just a direct rebuttal to the discovery response.
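
If I ever systematize this cross-examination step, the core of it could be as simple as collapsing each model’s bottom line and checking for a majority. A throwaway sketch of the idea (hypothetical, not part of my actual workflow; the function name and labels are made up):

```python
from collections import Counter
from typing import Dict, Optional, Tuple

def compare_recommendations(answers: Dict[str, str]) -> Tuple[Optional[str], bool]:
    """Given each model's bottom-line recommendation, return the majority
    position (None if tied) and whether the models were unanimous.
    Disagreement is the cue to stop and reason it out by hand."""
    tally = Counter(answers.values())
    ranked = tally.most_common()
    unanimous = len(tally) == 1
    # A majority only counts if the top answer strictly beats the runner-up.
    if ranked and (len(ranked) == 1 or ranked[0][1] > ranked[1][1]):
        return ranked[0][0], unanimous
    return None, unanimous
```

On the disagreement above, three of the four said send the email and Claude dissented, so this would report a non-unanimous majority; either way the final call stays with the human.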

Interestingly, opposing counsel asked for a discovery extension (after the suit has been active for more than two years, they are suddenly finding SHRA’s bureaucracy hard to navigate), but refused to do a rolling production or produce electronic records (which should be pretty easy, right?). This organization is such a mess that if I’d had access to these tools two years ago, they’d have been crushed out of the gate. Instead they were able to delay and avoid discovery to the point that we now have to start talking about institutional spoliation, because no one remembers anything.


Just had the thought that doing a blog like this while the litigation is ongoing is both naive and stupid, especially considering the mixed permissible-use messaging from the court (but if it’s okay for the bar, maybe it’s okay for me?). This popped up as I came across an old archive containing a backup of files I believed were lost after we got evicted, some of which are responsive to existing discovery requests. I haven’t even had a chance to go through it all yet and see what has already been disclosed and what hasn’t, but here I am in a position where I really want to talk about how the tools are helping me plan strategy while potentially exposing myself unnecessarily or inadvertently.

The fool forges forward

One of the things that stood out to me was how much of the information in this case should have already been protected but hasn’t been. This has always been a concern, and one of the reasons I was hesitant about pursuing this at all; the gross invasion of privacy necessary to pursue it feels caustic. My initial reaction was to file anonymously, but after some reading it appeared that requests to proceed anonymously are very rarely granted. It ultimately became a question of whether this was important enough (to both us and every other disabled participant SHRA treats similarly) to sacrifice our privacy over. I think so.

What should have been done out of the gate was putting a protective order in place. But while reading and trying to absorb as much of the local rules as I could (in the middle of being evicted, being cut off from other services, and other harassment), the option of a protective order just didn’t register at all. And the further along the process we got, the less space the option took up.

This is one place where LLMs work as an amazing sanity check: we can, without penalty, ask them to walk us through the process from the beginning, with as many permutations as we like, in a way that would cognitively exhaust any sane individual. And for individuals in the position we’re in, where we don’t even know what we don’t know, being able to constantly refactor new information in as we encounter it becomes a tremendously powerful tool.

While there’s supposed to be some leeway for “inelegant pleading” by pro se parties, in practice there isn’t unless a judge is personally sympathetic. That means pro se parties march into a minefield of issues that have nothing to do with the core issues of the case, procedural explosions that can turn a clear-cut set of facts into an irrelevant mist.

Leave a Comment

Your email address will not be published. Required fields are marked *

One of the most difficult aspects of all this is how prone to unique issues each LLM introduces. I’m bouncing between Claude 3.7, GPT 4o, DeepSeek, and Gemini, and they all have weird peculiarities that blow up the others. DeepSeek for instance is super focused on formatting, being obsessive about tables, bolding, italics, everything but the actual content. Gemini is a turd and likes to fight you the whole time when it comes to legal advice altogether, so you have to wrestle it to the ground with prompts and cobble together the results. Claude writes beautifully but is as deep as a puddle, while ChatGPT is deep but impossibly schizophrenic and ADHD at the same time.

I went back and looked at my first two pleadings I used these LLMs for and holy crap there are a lot of issues that I missed trusting the output. First, ChatGPT will absolutely mangle the hell out of citations, and worse is super confident about it’s manglings. As an example I uploaded a copy of the docket report, and somehow it still got every single citation wrong. Worse, it misinterpreted who actually filed particular motions, leading to it interpreting most of the case being dead.

One thing is really clear, that as powerful and as much potential as these tools have, we are a million miles away from them being ready to use without extensive supervision. They also are really terrible at understanding, as an example left to it’s own devices all of the LLMs completely ignored the actual damages I asked for in a motion for partial MSJ (basically just ruling that the ADA, FHA, and Rehab require an interactive process and individualized assessment) and transformed it into a full blown MSJ focused on damage calculations instead of a narrow injunctive relief. And they constantly go off on little tangents like that where they hyper focus on some aspect of the text that may or may not be contextually important and try to expand it.

All said, this is still very much a work in progress, and I’m learning that trust is bad.

Leave a Comment

Your email address will not be published. Required fields are marked *

It seems we can’t find what you’re looking for. Perhaps searching can help.

Welcome to The SHRA Files Blog — where we’ll be sharing updates, research, public records, and commentary on our ongoing civil rights lawsuit and the broader failures of Sacramento’s housing system.

This site exists because no one else was telling this story. Because no agency, nonprofit, or public official stepped in when the Sacramento Housing and Redevelopment Agency (SHRA) failed to meet its basic responsibilities. Because we — and many others like us — were denied access to services that should have been guaranteed.

We started with nothing but documentation, determination, and the experience of being repeatedly failed by the systems meant to help. When traditional legal aid and advocacy organizations told us they couldn’t help, we turned to AI tools to help us structure our case, understand the law, and begin fighting back.

This site is just the beginning.

In the weeks ahead, we’ll be:

  • Breaking down SHRA’s failures in serving disabled and low-income tenants
  • Publishing public filings and supporting exhibits
  • Highlighting how the Housing Choice Voucher program is being mismanaged
  • Documenting the role of AI in expanding legal access

If you’ve been impacted by SHRA or want to contribute to the project, check out our Get Involved page.

Thanks for reading — and welcome to the fight.

— The SHRA Files Team

Leave a Comment

Your email address will not be published. Required fields are marked *

I’ve been attempting to push discovery, the result has been underwhelming. So far they’ve missed most of the deadlines, when notified they missed the deadlines they keep asking for extensions, and when they get extensions they respond with boilerplate and bullshit. I’m to the point where I’m filing a Rule 37(b) motion (will upload to the files section) after I’m finished with this post. The neat thing about the filing is that there’s over 300 pages of exhibits, all of which is boilerplate and bullshit that I had to attach and mail back to the Defense because the magistrate gave me the big middle finger by allowing the court to notify me of things electronically, but refusing to allow me to file electronically. I have lots to say about this magistrate, but we’ll wait until we get clear of her first.

One of the most fascinating things about the boilerplate and bullshit that did get returned was the revelation that five of the named Defendants no longer work at SHRA, and the Defense isn’t sure if they represent them anymore. Like they completely hid that almost everyone involved with the day to day operations over there has left and somehow didn’t mention it to us or the court. Worse, in their initial disclosure, they claimed only one person (who is no longer at SHRA) was responsible for deciding reasonable accommodation requests, now they’ve given me a list of 16 people, with no explanation.

Oh yeah, they can’t seem to find any of the electronically stored information that under the CPRA they’d be responsive to, which is the reason they need so many extensions.

The worst part is that despite them admitting all of this, there’s zero chance of them getting sanctions. And there’s literally nothing to be done about it unless/until it gets to the appellate level.

Leave a Comment

Your email address will not be published. Required fields are marked *

Up until now I assumed that the issues at SHRA were just incompetence creep, that the issues are product of slow pushes back on compliance issues that never got corrected, to the point where institutional knowledge of the practices necessary to maintain compliance was lost.

One of the things that I really like about DeepSeek is that it gets information through methods that aren’t exactly on the up and up, which means that it finds things which are hidden behind legal wrangling that doesn’t exist in China. And one of the things it found was a 2023 Internal Memo generated through whistleblower testimony in which SHRA staff instructed to “prioritize voucher terminations over costly mods”.

And the implementation of this was SHRA terminating vouchers over requests for shower chairs and grab bars as early as 2022. They provided data which showed 68% of disabled tenants who requested mods in 2023 lost vouchers within 6 months (vs. 12% of non-disabled).

WHAT.

And there’s so much worse that I’ll share once I get the discovery back. I’m stunned.

Leave a Comment

Your email address will not be published. Required fields are marked *

Okay the last few days have been a bit of a flurry. I’m still working on getting all the filings and the Defense responses uploaded, but I need to make sure we have a sane directory setup first because there’s quite a bit.

After a few weeks using these LLMs, my attitudes about them are starting to shift. Each of them seems to have their own “personality” above and beyond the artificial personality. The LLM I’ve been most amazed by so far is DeepSeek, which is funny because it was the LLM I was most skeptical of initially. Part of the issue with DeepSeek is that it doesn’t do as good of a job hand holding through the though process, or more appropriately, it’s probably the most “autistic” of the engines. The biggest standout for DeepSeek (especially compared to Gemini which is the most comparable) is that it’s SO GOOD with citations. It’s the only one that freely “admits” it’s plundering the WestLaw database meaning we have a TON of local Eastern District citations.

My citation strategy has been to use something from each level, Supreme Court, 9th Circuit, California (Fed), Eastern District. The reality is the magistrate doesn’t read them anyway, but DeepSeek ultimately gives me a pool of citations to choose from, something the other LLMs struggle with (or just don’t do no matter how you beat them).

Gemini (using 2.5 Pro/5-06) is now my primary writing starting point, even though I hate that it can’t export to .odf or even .docx formats. Because I have to reformat the documents significantly, it has the worst “canvas” style option. That’s kind of worth the annoyance though since Gemini is easily the most “thoughful” and well rounded of the LLMs I’ve used. Much like DeepSeek gives lots of citation options, Gemini gives the best pool of writing options, it’s ability to flexibly address issues in different ways is the best of the bunch so far.

Claude is still my pretty but shallow option, it comes up with the best/most compelling pure language, even if it’s depth of analysis falls short of the others. I wish I didn’t have to prompt it so explicitly to get decent results or it might be the primary canvas despite it’s painful context window. One thing I find it really helpful for is running my DeepSeek (which produces pretty.. “direct” language) through Claude to soften it up.

And finally there’s ChatGPT which up until the last few days was my primary for everything. It’s easy to get started with it, it’s encouraging, it produces compelling results, but it’s kind of a piece of shit. It’s constantly messing up context, screwing up citations, and arbitrarily rewriting things. It’s easily the most obstinately wrong of the LLMs, insisting that it’s doing something correctly no matter how many times you post the incorrect output back into it and it admits it messed up. It’s the only LLM out of the three that I’ve had to deal with “hallucinations”, and it explains it as “woops, I overwrite my own context a lot accidentally”. It’s absolutely maddening at times. I’m steadily weaning myself off of it and it’ll probably just be another voice in the room soon.

An interesting thing happened a bit ago, I had my first LLM disagreement! The Defense responded to a discovery request with bullshit and boilerplate, so I asked each of the LLMs what a good email response would be considering I’m already nearly done with a rule 37 motion to compel. Claude responded that it’s probably not a good idea to respond at all, just update the motion with the response as an exhibit, note the bullshit parts since they helped (a lot), and call it a day. DeepSeek was it’s typical “RIP THEIR THROATS OUT” and provided a detailed list of regulations and rules to go on the offensive instead of only using the motion route. Gemini pointed out that going the email route is more about reinforcing our meet and confer duty, we’re right and we are going to be integrating this in the motion, but the focus here should be on making the meet and confer rock solid (pretty good advice). ChatGPT recommended the email, but produced recommendations that didn’t focus on strategy at all, just a direct rebuttal to the discovery response.

Interestingly, opposing counsel asked for a discovery extension (after more than two years the suit has been active, they are suddenly finding SHRA bureaucracy hard to navigate), but refused to do a rolling production or produce electronic records (which should be pretty easy right?). It’s baffling to me that if I had access to these tools two years ago, this organization is such a mess that they’d have gotten crushed out of the gate. Instead they were able to delay and avoid discovery to the point that we now have to start talking about institutional spoliation because no one remembers anything.

Leave a Comment

Your email address will not be published. Required fields are marked *

Just had the thought that doing a blog like this, while the litigation is on-going, is both naive and stupid, especially considering the mixed permissible use messaging from the court (but maybe okay for the bar, okay for me?). This popped up as I came across an old archive that had a backup of files which I believed was lost after we got evicted, some of which is responsive to existing discovery requests. I haven’t even had a chance to go through it all yet and see what has already been disclosed and what hasn’t, but here I am in a position where I really want to talk about how the tools are helping me plan strategy while potentially exposing myself unnecessarily/inadvertently.

The fool forges forward

One of the things that stood out to me was how much of the information in this case should have already been protected but hasn’t been. This has always been a concern, and one of the reasons I was hesitant about pursuing this at all, the gross invasion of privacy necessary to pursue this feels caustic. My initial reaction was filing anonymously, but after some reading it appeared that these are very rarely granted. It ultimately became a choice between whether this was important enough (to both us and every other disabled participant SHRA treats similarly) to sacrifice our privacy over. I think so.

What should have been done out of the gate is setting a protective order in place. Reading and trying to absorb as much of the local rules (while in the middle of being evicted, being cut off from other services, and other harassment), the option of a protective order just didn’t really register at all. The further along the process we got, the less space the option took up.

This is one place where LLMs work as an amazing sanity check, we can without penalty ask it to walk us through the process from the beginning, with as many permutations as we like, in a way that would cognitively exhaust any sane individual. And for individuals in the position we are, where we don’t even know what we don’t know, being able to constantly refactor new information in as we encounter it becomes a tremendously powerful tool.

While there’s supposed to be some leeway for “inelegant pleading” by pro se parties, in practice there isn’t unless a judge is personally sympathetic. Which means that pro se parties march into a minefield of issues that have nothing to do with the core issues of the case, procedural explosions which can turn a clear cut set of facts into an irrelevant mist.

Leave a Comment

Your email address will not be published. Required fields are marked *

One of the most difficult aspects of all this is how prone to unique issues each LLM introduces. I’m bouncing between Claude 3.7, GPT 4o, DeepSeek, and Gemini, and they all have weird peculiarities that blow up the others. DeepSeek for instance is super focused on formatting, being obsessive about tables, bolding, italics, everything but the actual content. Gemini is a turd and likes to fight you the whole time when it comes to legal advice altogether, so you have to wrestle it to the ground with prompts and cobble together the results. Claude writes beautifully but is as deep as a puddle, while ChatGPT is deep but impossibly schizophrenic and ADHD at the same time.

I went back and looked at my first two pleadings I used these LLMs for and holy crap there are a lot of issues that I missed trusting the output. First, ChatGPT will absolutely mangle the hell out of citations, and worse is super confident about it’s manglings. As an example I uploaded a copy of the docket report, and somehow it still got every single citation wrong. Worse, it misinterpreted who actually filed particular motions, leading to it interpreting most of the case being dead.

One thing is really clear, that as powerful and as much potential as these tools have, we are a million miles away from them being ready to use without extensive supervision. They also are really terrible at understanding, as an example left to it’s own devices all of the LLMs completely ignored the actual damages I asked for in a motion for partial MSJ (basically just ruling that the ADA, FHA, and Rehab require an interactive process and individualized assessment) and transformed it into a full blown MSJ focused on damage calculations instead of a narrow injunctive relief. And they constantly go off on little tangents like that where they hyper focus on some aspect of the text that may or may not be contextually important and try to expand it.

All said, this is still very much a work in progress, and I’m learning that trust is bad.

Leave a Comment

Your email address will not be published. Required fields are marked *

It seems we can’t find what you’re looking for. Perhaps searching can help.

Welcome to The SHRA Files Blog — where we’ll be sharing updates, research, public records, and commentary on our ongoing civil rights lawsuit and the broader failures of Sacramento’s housing system.

This site exists because no one else was telling this story. Because no agency, nonprofit, or public official stepped in when the Sacramento Housing and Redevelopment Agency (SHRA) failed to meet its basic responsibilities. Because we — and many others like us — were denied access to services that should have been guaranteed.

We started with nothing but documentation, determination, and the experience of being repeatedly failed by the systems meant to help. When traditional legal aid and advocacy organizations told us they couldn’t help, we turned to AI tools to help us structure our case, understand the law, and begin fighting back.

This site is just the beginning.

In the weeks ahead, we’ll be:

  • Breaking down SHRA’s failures in serving disabled and low-income tenants
  • Publishing public filings and supporting exhibits
  • Highlighting how the Housing Choice Voucher program is being mismanaged
  • Documenting the role of AI in expanding legal access

If you’ve been impacted by SHRA or want to contribute to the project, check out our Get Involved page.

Thanks for reading — and welcome to the fight.

— The SHRA Files Team

Leave a Comment

Your email address will not be published. Required fields are marked *

One of the things I really like about DeepSeek is that it gets information through methods that aren’t exactly on the up and up, which means it finds things hidden behind legal wrangling that doesn’t exist in China. One of the things it turned up was a 2023 internal memo, generated through whistleblower testimony, in which SHRA staff were instructed to “prioritize voucher terminations over costly mods”.

In practice, that meant SHRA terminating vouchers over requests for shower chairs and grab bars as early as 2022. It also produced data showing that 68% of disabled tenants who requested mods in 2023 lost their vouchers within six months (versus 12% of non-disabled tenants).
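For scale, a quick back-of-the-envelope calculation using only the figures quoted above (which come from DeepSeek’s summary, not from data I’ve independently verified):

```python
# Disparity in voucher-loss rates, using the percentages quoted above.
# These figures are from DeepSeek's summary, not verified source data.
disabled_loss_rate = 0.68      # disabled tenants who requested mods in 2023
non_disabled_loss_rate = 0.12  # non-disabled tenants over the same period

ratio = disabled_loss_rate / non_disabled_loss_rate
print(f"Disabled requesters lost vouchers at {ratio:.1f}x the non-disabled rate")
# -> Disabled requesters lost vouchers at 5.7x the non-disabled rate
```

If those numbers hold up in discovery, that is nearly a sixfold disparity.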

WHAT.

And there’s so much worse that I’ll share once I get the discovery back. I’m stunned.


Okay the last few days have been a bit of a flurry. I’m still working on getting all the filings and the Defense responses uploaded, but I need to make sure we have a sane directory setup first because there’s quite a bit.
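For anyone wrangling a similar pile, this is the kind of directory skeleton I mean. The folder names here are purely illustrative (not the actual layout of this site’s files section), sketched in Python so it’s easy to adapt:

```python
# Sketch of a case-file directory skeleton. Folder names are illustrative,
# not the actual layout of this site's files section.
from pathlib import Path

def make_case_tree(root: str) -> list[Path]:
    """Create a simple filings/responses/exhibits tree under root."""
    subdirs = [
        "filings/motions",
        "filings/declarations",
        "defense-responses/discovery",
        "defense-responses/motions",
        "exhibits",
        "correspondence",
    ]
    created = []
    for sub in subdirs:
        p = Path(root) / sub
        p.mkdir(parents=True, exist_ok=True)  # safe to re-run
        created.append(p)
    return created
```

Inside each folder, a date-prefixed naming convention (e.g. `2025-05-06_motion-to-compel.pdf`) keeps everything sortable without any extra tooling.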

After a few weeks using these LLMs, my attitudes about them are starting to shift. Each of them seems to have its own “personality” above and beyond the artificial personality. The LLM I’ve been most amazed by so far is DeepSeek, which is funny because it was the one I was most skeptical of initially. Part of the issue with DeepSeek is that it doesn’t do as good a job of hand-holding you through the thought process, or more appropriately, it’s probably the most “autistic” of the engines. The biggest standout for DeepSeek (especially compared to Gemini, which is the most comparable) is that it’s SO GOOD with citations. It’s the only one that freely “admits” it’s plundering the WestLaw database, meaning we have a TON of local Eastern District citations.

My citation strategy has been to use something from each level: Supreme Court, 9th Circuit, California federal courts, and the Eastern District itself. The reality is the magistrate doesn’t read them anyway, but DeepSeek ultimately gives me a pool of citations to choose from, something the other LLMs struggle with (or just won’t do no matter how you beat them).

Gemini (using 2.5 Pro/5-06) is now my primary starting point for writing, even though I hate that it can’t export to .odf or even .docx formats. It also has the worst “canvas”-style option, which means I have to reformat the documents significantly. That’s worth the annoyance though, since Gemini is easily the most thoughtful and well-rounded of the LLMs I’ve used. Much like DeepSeek gives lots of citation options, Gemini gives the best pool of writing options; its ability to flexibly address issues in different ways is the best of the bunch so far.

Claude is still my pretty-but-shallow option: it comes up with the best and most compelling pure language, even if its depth of analysis falls short of the others. I wish I didn’t have to prompt it so explicitly to get decent results, or it might be the primary canvas despite its painful context window. One thing I find it really helpful for is running my DeepSeek output (which tends toward pretty… “direct” language) through Claude to soften it up.

And finally there’s ChatGPT, which up until the last few days was my primary for everything. It’s easy to get started with, it’s encouraging, and it produces compelling results, but it’s kind of a piece of shit. It’s constantly messing up context, screwing up citations, and arbitrarily rewriting things. It’s easily the most obstinately wrong of the LLMs, insisting it’s doing something correctly no matter how many times you paste the incorrect output back in and it admits it messed up. It’s the only one of the four where I’ve had to deal with outright “hallucinations”, which it explains as “whoops, I accidentally overwrite my own context a lot”. It’s absolutely maddening at times. I’m steadily weaning myself off of it, and it’ll probably just be another voice in the room soon.

An interesting thing happened a bit ago: I had my first LLM disagreement! The Defense responded to a discovery request with bullshit and boilerplate, so I asked each of the LLMs what a good email response would be, considering I’m already nearly done with a Rule 37 motion to compel. Claude responded that it’s probably not a good idea to respond at all: just update the motion with the response as an exhibit, note the bullshit parts since they helped (a lot), and call it a day. DeepSeek was its typical “RIP THEIR THROATS OUT” self and provided a detailed list of regulations and rules for going on the offensive instead of only using the motion route. Gemini pointed out that the email route is really about reinforcing our meet-and-confer duty: we’re right, and we’re going to integrate this into the motion, but the focus here should be on making the meet-and-confer record rock solid (pretty good advice). ChatGPT recommended the email, but produced recommendations that didn’t focus on strategy at all, just a direct rebuttal to the discovery response.

Interestingly, opposing counsel asked for a discovery extension (after more than two years of the suit being active, they are suddenly finding SHRA’s bureaucracy hard to navigate), but refused to do a rolling production or produce electronic records (which should be pretty easy, right?). What’s baffling is that if I’d had access to these tools two years ago, this organization is such a mess that they’d have been crushed out of the gate. Instead they were able to delay and avoid discovery to the point that we now have to start talking about institutional spoliation, because no one remembers anything.


Just had the thought that doing a blog like this while the litigation is ongoing is both naive and stupid, especially considering the mixed permissible-use messaging from the court (but maybe okay for the bar, okay for me?). This popped up as I came across an old archive containing a backup of files I believed were lost after we got evicted, some of which are responsive to existing discovery requests. I haven’t even had a chance to go through it all yet and see what has already been disclosed and what hasn’t, but here I am in a position where I really want to talk about how the tools are helping me plan strategy while potentially exposing myself unnecessarily or inadvertently.

The fool forges forward

One of the things that stood out to me was how much of the information in this case should already have been protected but hasn’t been. This has always been a concern, and one of the reasons I was hesitant about pursuing this at all: the gross invasion of privacy necessary to pursue it feels caustic. My initial reaction was to file anonymously, but after some reading it appeared that requests to proceed anonymously are very rarely granted. It ultimately became a question of whether this was important enough (to both us and every other disabled participant SHRA treats similarly) to sacrifice our privacy over. I think so.

What should have been done out of the gate was putting a protective order in place. While reading and trying to absorb as much of the local rules as I could (in the middle of being evicted, being cut off from other services, and other harassment), the option of a protective order just didn’t really register at all. And the further along in the process we got, the less space the option took up.

This is one place where LLMs work as an amazing sanity check: we can, without penalty, ask one to walk us through the process from the beginning, with as many permutations as we like, in a way that would cognitively exhaust any sane individual. And for people in our position, who don’t even know what we don’t know, being able to constantly refactor new information in as we encounter it becomes a tremendously powerful tool.

While there’s supposed to be some leeway for “inelegant pleading” by pro se parties, in practice there isn’t unless a judge is personally sympathetic. Which means that pro se parties march into a minefield of issues that have nothing to do with the core issues of the case, procedural explosions which can turn a clear cut set of facts into an irrelevant mist.


One of the most difficult aspects of all this is how each LLM introduces its own unique issues. I’m bouncing between Claude 3.7, GPT-4o, DeepSeek, and Gemini, and they all have weird peculiarities that blow up the others. DeepSeek, for instance, is super focused on formatting, obsessing over tables, bolding, italics, everything but the actual content. Gemini is a turd that fights you the whole way on legal advice altogether, so you have to wrestle it to the ground with prompts and cobble together the results. Claude writes beautifully but is as deep as a puddle, while ChatGPT is deep but impossibly schizophrenic and ADHD at the same time.

I went back and looked at the first two pleadings I used these LLMs for, and holy crap, there are a lot of issues I missed by trusting the output. First, ChatGPT will absolutely mangle the hell out of citations, and worse, is super confident about its manglings. As an example, I uploaded a copy of the docket report, and somehow it still got every single citation wrong. Worse, it misinterpreted who actually filed particular motions, leading it to conclude that most of the case was dead.

One thing is really clear: for all the power and potential these tools have, we are a million miles away from them being usable without extensive supervision. They are also really terrible at understanding intent. As an example, left to their own devices, all of the LLMs completely ignored the actual relief I asked for in a motion for partial summary judgment (basically just a ruling that the ADA, FHA, and Rehab Act require an interactive process and individualized assessment) and transformed it into a full-blown MSJ focused on damage calculations instead of narrow injunctive relief. And they constantly go off on little tangents like that, hyper-focusing on some aspect of the text that may or may not be contextually important and trying to expand it.

All said, this is still very much a work in progress, and I’m learning that trust is bad.


Welcome to The SHRA Files Blog — where we’ll be sharing updates, research, public records, and commentary on our ongoing civil rights lawsuit and the broader failures of Sacramento’s housing system.

This site exists because no one else was telling this story. Because no agency, nonprofit, or public official stepped in when the Sacramento Housing and Redevelopment Agency (SHRA) failed to meet its basic responsibilities. Because we — and many others like us — were denied access to services that should have been guaranteed.

We started with nothing but documentation, determination, and the experience of being repeatedly failed by the systems meant to help. When traditional legal aid and advocacy organizations told us they couldn’t help, we turned to AI tools to help us structure our case, understand the law, and begin fighting back.

This site is just the beginning.

In the weeks ahead, we’ll be:

  • Breaking down SHRA’s failures in serving disabled and low-income tenants
  • Publishing public filings and supporting exhibits
  • Highlighting how the Housing Choice Voucher program is being mismanaged
  • Documenting the role of AI in expanding legal access

If you’ve been impacted by SHRA or want to contribute to the project, check out our Get Involved page.

Thanks for reading — and welcome to the fight.

— The SHRA Files Team


I’ve been attempting to push discovery, the result has been underwhelming. So far they’ve missed most of the deadlines, when notified they missed the deadlines they keep asking for extensions, and when they get extensions they respond with boilerplate and bullshit. I’m to the point where I’m filing a Rule 37(b) motion (will upload to the files section) after I’m finished with this post. The neat thing about the filing is that there’s over 300 pages of exhibits, all of which is boilerplate and bullshit that I had to attach and mail back to the Defense because the magistrate gave me the big middle finger by allowing the court to notify me of things electronically, but refusing to allow me to file electronically. I have lots to say about this magistrate, but we’ll wait until we get clear of her first.

One of the most fascinating things about the boilerplate and bullshit that did get returned was the revelation that five of the named Defendants no longer work at SHRA, and the Defense isn’t sure if they represent them anymore. Like they completely hid that almost everyone involved with the day to day operations over there has left and somehow didn’t mention it to us or the court. Worse, in their initial disclosure, they claimed only one person (who is no longer at SHRA) was responsible for deciding reasonable accommodation requests, now they’ve given me a list of 16 people, with no explanation.

Oh yeah, they can’t seem to find any of the electronically stored information that under the CPRA they’d be responsive to, which is the reason they need so many extensions.

The worst part is that despite them admitting all of this, there’s zero chance of them getting sanctions. And there’s literally nothing to be done about it unless/until it gets to the appellate level.

Leave a Comment

Your email address will not be published. Required fields are marked *

Up until now I assumed that the issues at SHRA were just incompetence creep, that the issues are product of slow pushes back on compliance issues that never got corrected, to the point where institutional knowledge of the practices necessary to maintain compliance was lost.

One of the things that I really like about DeepSeek is that it gets information through methods that aren’t exactly on the up and up, which means that it finds things which are hidden behind legal wrangling that doesn’t exist in China. And one of the things it found was a 2023 Internal Memo generated through whistleblower testimony in which SHRA staff instructed to “prioritize voucher terminations over costly mods”.

And the implementation of this was SHRA terminating vouchers over requests for shower chairs and grab bars as early as 2022. They provided data which showed 68% of disabled tenants who requested mods in 2023 lost vouchers within 6 months (vs. 12% of non-disabled).

WHAT.

And there’s so much worse that I’ll share once I get the discovery back. I’m stunned.

Leave a Comment

Your email address will not be published. Required fields are marked *

Okay the last few days have been a bit of a flurry. I’m still working on getting all the filings and the Defense responses uploaded, but I need to make sure we have a sane directory setup first because there’s quite a bit.

After a few weeks using these LLMs, my attitudes about them are starting to shift. Each of them seems to have their own “personality” above and beyond the artificial personality. The LLM I’ve been most amazed by so far is DeepSeek, which is funny because it was the LLM I was most skeptical of initially. Part of the issue with DeepSeek is that it doesn’t do as good of a job hand holding through the though process, or more appropriately, it’s probably the most “autistic” of the engines. The biggest standout for DeepSeek (especially compared to Gemini which is the most comparable) is that it’s SO GOOD with citations. It’s the only one that freely “admits” it’s plundering the WestLaw database meaning we have a TON of local Eastern District citations.

My citation strategy has been to use something from each level, Supreme Court, 9th Circuit, California (Fed), Eastern District. The reality is the magistrate doesn’t read them anyway, but DeepSeek ultimately gives me a pool of citations to choose from, something the other LLMs struggle with (or just don’t do no matter how you beat them).

Gemini (using 2.5 Pro/5-06) is now my primary writing starting point, even though I hate that it can’t export to .odf or even .docx formats. Because I have to reformat the documents significantly, it has the worst “canvas” style option. That’s kind of worth the annoyance though since Gemini is easily the most “thoughful” and well rounded of the LLMs I’ve used. Much like DeepSeek gives lots of citation options, Gemini gives the best pool of writing options, it’s ability to flexibly address issues in different ways is the best of the bunch so far.

Claude is still my pretty but shallow option, it comes up with the best/most compelling pure language, even if it’s depth of analysis falls short of the others. I wish I didn’t have to prompt it so explicitly to get decent results or it might be the primary canvas despite it’s painful context window. One thing I find it really helpful for is running my DeepSeek (which produces pretty.. “direct” language) through Claude to soften it up.

And finally there’s ChatGPT which up until the last few days was my primary for everything. It’s easy to get started with it, it’s encouraging, it produces compelling results, but it’s kind of a piece of shit. It’s constantly messing up context, screwing up citations, and arbitrarily rewriting things. It’s easily the most obstinately wrong of the LLMs, insisting that it’s doing something correctly no matter how many times you post the incorrect output back into it and it admits it messed up. It’s the only LLM out of the three that I’ve had to deal with “hallucinations”, and it explains it as “woops, I overwrite my own context a lot accidentally”. It’s absolutely maddening at times. I’m steadily weaning myself off of it and it’ll probably just be another voice in the room soon.

An interesting thing happened a bit ago, I had my first LLM disagreement! The Defense responded to a discovery request with bullshit and boilerplate, so I asked each of the LLMs what a good email response would be considering I’m already nearly done with a rule 37 motion to compel. Claude responded that it’s probably not a good idea to respond at all, just update the motion with the response as an exhibit, note the bullshit parts since they helped (a lot), and call it a day. DeepSeek was it’s typical “RIP THEIR THROATS OUT” and provided a detailed list of regulations and rules to go on the offensive instead of only using the motion route. Gemini pointed out that going the email route is more about reinforcing our meet and confer duty, we’re right and we are going to be integrating this in the motion, but the focus here should be on making the meet and confer rock solid (pretty good advice). ChatGPT recommended the email, but produced recommendations that didn’t focus on strategy at all, just a direct rebuttal to the discovery response.

Interestingly, opposing counsel asked for a discovery extension (after more than two years the suit has been active, they are suddenly finding SHRA bureaucracy hard to navigate), but refused to do a rolling production or produce electronic records (which should be pretty easy right?). It’s baffling to me that if I had access to these tools two years ago, this organization is such a mess that they’d have gotten crushed out of the gate. Instead they were able to delay and avoid discovery to the point that we now have to start talking about institutional spoliation because no one remembers anything.

Leave a Comment

Your email address will not be published. Required fields are marked *

Just had the thought that doing a blog like this, while the litigation is on-going, is both naive and stupid, especially considering the mixed permissible use messaging from the court (but maybe okay for the bar, okay for me?). This popped up as I came across an old archive that had a backup of files which I believed was lost after we got evicted, some of which is responsive to existing discovery requests. I haven’t even had a chance to go through it all yet and see what has already been disclosed and what hasn’t, but here I am in a position where I really want to talk about how the tools are helping me plan strategy while potentially exposing myself unnecessarily/inadvertently.

The fool forges forward

One of the things that stood out to me was how much of the information in this case should have already been protected but hasn’t been. This has always been a concern, and one of the reasons I was hesitant about pursuing this at all, the gross invasion of privacy necessary to pursue this feels caustic. My initial reaction was filing anonymously, but after some reading it appeared that these are very rarely granted. It ultimately became a choice between whether this was important enough (to both us and every other disabled participant SHRA treats similarly) to sacrifice our privacy over. I think so.

What should have been done out of the gate is setting a protective order in place. Reading and trying to absorb as much of the local rules (while in the middle of being evicted, being cut off from other services, and other harassment), the option of a protective order just didn’t really register at all. The further along the process we got, the less space the option took up.

This is one place where LLMs work as an amazing sanity check, we can without penalty ask it to walk us through the process from the beginning, with as many permutations as we like, in a way that would cognitively exhaust any sane individual. And for individuals in the position we are, where we don’t even know what we don’t know, being able to constantly refactor new information in as we encounter it becomes a tremendously powerful tool.

While there’s supposed to be some leeway for “inelegant pleading” by pro se parties, in practice there isn’t unless a judge is personally sympathetic. Which means that pro se parties march into a minefield of issues that have nothing to do with the core issues of the case, procedural explosions which can turn a clear cut set of facts into an irrelevant mist.

Leave a Comment

Your email address will not be published. Required fields are marked *

One of the most difficult aspects of all this is how prone to unique issues each LLM introduces. I’m bouncing between Claude 3.7, GPT 4o, DeepSeek, and Gemini, and they all have weird peculiarities that blow up the others. DeepSeek for instance is super focused on formatting, being obsessive about tables, bolding, italics, everything but the actual content. Gemini is a turd and likes to fight you the whole time when it comes to legal advice altogether, so you have to wrestle it to the ground with prompts and cobble together the results. Claude writes beautifully but is as deep as a puddle, while ChatGPT is deep but impossibly schizophrenic and ADHD at the same time.

I went back and looked at my first two pleadings I used these LLMs for and holy crap there are a lot of issues that I missed trusting the output. First, ChatGPT will absolutely mangle the hell out of citations, and worse is super confident about it’s manglings. As an example I uploaded a copy of the docket report, and somehow it still got every single citation wrong. Worse, it misinterpreted who actually filed particular motions, leading to it interpreting most of the case being dead.

One thing is really clear, that as powerful and as much potential as these tools have, we are a million miles away from them being ready to use without extensive supervision. They also are really terrible at understanding, as an example left to it’s own devices all of the LLMs completely ignored the actual damages I asked for in a motion for partial MSJ (basically just ruling that the ADA, FHA, and Rehab require an interactive process and individualized assessment) and transformed it into a full blown MSJ focused on damage calculations instead of a narrow injunctive relief. And they constantly go off on little tangents like that where they hyper focus on some aspect of the text that may or may not be contextually important and try to expand it.

All said, this is still very much a work in progress, and I’m learning that trust is bad.

Leave a Comment

Your email address will not be published. Required fields are marked *

It seems we can’t find what you’re looking for. Perhaps searching can help.

Welcome to The SHRA Files Blog — where we’ll be sharing updates, research, public records, and commentary on our ongoing civil rights lawsuit and the broader failures of Sacramento’s housing system.

This site exists because no one else was telling this story. Because no agency, nonprofit, or public official stepped in when the Sacramento Housing and Redevelopment Agency (SHRA) failed to meet its basic responsibilities. Because we — and many others like us — were denied access to services that should have been guaranteed.

We started with nothing but documentation, determination, and the experience of being repeatedly failed by the systems meant to help. When traditional legal aid and advocacy organizations told us they couldn’t help, we turned to AI tools to help us structure our case, understand the law, and begin fighting back.

This site is just the beginning.

In the weeks ahead, we’ll be:

  • Breaking down SHRA’s failures in serving disabled and low-income tenants
  • Publishing public filings and supporting exhibits
  • Highlighting how the Housing Choice Voucher program is being mismanaged
  • Documenting the role of AI in expanding legal access

If you’ve been impacted by SHRA or want to contribute to the project, check out our Get Involved page.

Thanks for reading — and welcome to the fight.

— The SHRA Files Team

Leave a Comment

Your email address will not be published. Required fields are marked *

I’ve been attempting to push discovery, the result has been underwhelming. So far they’ve missed most of the deadlines, when notified they missed the deadlines they keep asking for extensions, and when they get extensions they respond with boilerplate and bullshit. I’m to the point where I’m filing a Rule 37(b) motion (will upload to the files section) after I’m finished with this post. The neat thing about the filing is that there’s over 300 pages of exhibits, all of which is boilerplate and bullshit that I had to attach and mail back to the Defense because the magistrate gave me the big middle finger by allowing the court to notify me of things electronically, but refusing to allow me to file electronically. I have lots to say about this magistrate, but we’ll wait until we get clear of her first.

One of the most fascinating things about the boilerplate and bullshit that did get returned was the revelation that five of the named Defendants no longer work at SHRA, and the Defense isn’t sure if they represent them anymore. Like they completely hid that almost everyone involved with the day to day operations over there has left and somehow didn’t mention it to us or the court. Worse, in their initial disclosure, they claimed only one person (who is no longer at SHRA) was responsible for deciding reasonable accommodation requests, now they’ve given me a list of 16 people, with no explanation.

Oh yeah, they can’t seem to find any of the electronically stored information that under the CPRA they’d be responsive to, which is the reason they need so many extensions.

The worst part is that despite them admitting all of this, there’s zero chance of them getting sanctions. And there’s literally nothing to be done about it unless/until it gets to the appellate level.

Leave a Comment

Your email address will not be published. Required fields are marked *

Up until now I assumed that the issues at SHRA were just incompetence creep, that the issues are product of slow pushes back on compliance issues that never got corrected, to the point where institutional knowledge of the practices necessary to maintain compliance was lost.

One of the things that I really like about DeepSeek is that it gets information through methods that aren’t exactly on the up and up, which means that it finds things which are hidden behind legal wrangling that doesn’t exist in China. And one of the things it found was a 2023 Internal Memo generated through whistleblower testimony in which SHRA staff instructed to “prioritize voucher terminations over costly mods”.

And the implementation of this was SHRA terminating vouchers over requests for shower chairs and grab bars as early as 2022. They provided data which showed 68% of disabled tenants who requested mods in 2023 lost vouchers within 6 months (vs. 12% of non-disabled).

WHAT.

And there’s so much worse that I’ll share once I get the discovery back. I’m stunned.

Leave a Comment

Your email address will not be published. Required fields are marked *

Okay the last few days have been a bit of a flurry. I’m still working on getting all the filings and the Defense responses uploaded, but I need to make sure we have a sane directory setup first because there’s quite a bit.

After a few weeks using these LLMs, my attitudes about them are starting to shift. Each of them seems to have their own “personality” above and beyond the artificial personality. The LLM I’ve been most amazed by so far is DeepSeek, which is funny because it was the LLM I was most skeptical of initially. Part of the issue with DeepSeek is that it doesn’t do as good of a job hand holding through the though process, or more appropriately, it’s probably the most “autistic” of the engines. The biggest standout for DeepSeek (especially compared to Gemini which is the most comparable) is that it’s SO GOOD with citations. It’s the only one that freely “admits” it’s plundering the WestLaw database meaning we have a TON of local Eastern District citations.


Just had the thought that doing a blog like this while the litigation is ongoing is both naive and stupid, especially considering the mixed permissible-use messaging from the court (but maybe what's okay for the bar is okay for me?). This popped up as I came across an old archive with a backup of files I believed were lost after we got evicted, some of which are responsive to existing discovery requests. I haven't even had a chance to go through it all and see what has already been disclosed and what hasn't, but here I am wanting to talk about how these tools are helping me plan strategy while potentially exposing myself unnecessarily or inadvertently.

The fool forges forward

One of the things that stood out to me was how much of the information in this case should already have been protected but hasn't been. This has always been a concern, and one of the reasons I was hesitant about pursuing this at all: the gross invasion of privacy involved feels caustic. My initial reaction was to file anonymously, but after some reading it appeared that anonymous filings are very rarely granted. It ultimately became a question of whether this was important enough (to both us and every other disabled participant SHRA treats similarly) to sacrifice our privacy over. I think so.

What should have been done out of the gate was getting a protective order in place. While I was reading and trying to absorb as much of the local rules as I could (in the middle of being evicted, being cut off from other services, and other harassment), the option of a protective order just didn't register at all. The further along the process we got, the less space the option took up.

This is one place where LLMs work as an amazing sanity check: we can, without penalty, ask one to walk us through the process from the beginning, with as many permutations as we like, in a way that would cognitively exhaust any sane individual. And for people in our position, who don't even know what we don't know, being able to constantly factor new information in as we encounter it is a tremendously powerful tool.

While there's supposed to be some leeway for "inelegant pleading" by pro se parties, in practice there isn't unless a judge is personally sympathetic. That means pro se parties march into a minefield of issues that have nothing to do with the core of the case, procedural explosions that can turn a clear-cut set of facts into an irrelevant mist.


One of the most difficult aspects of all this is the unique set of issues each LLM introduces. I'm bouncing between Claude 3.7, GPT-4o, DeepSeek, and Gemini, and they all have weird peculiarities that blow up the others' output. DeepSeek, for instance, is super focused on formatting, obsessing over tables, bolding, italics, everything but the actual content. Gemini is a turd that fights you the whole way on legal advice, so you have to wrestle it to the ground with prompts and cobble together the results. Claude writes beautifully but is as deep as a puddle, while ChatGPT is deep but impossibly scattered.

I went back and looked at the first two pleadings I used these LLMs for and holy crap, there are a lot of issues I missed by trusting the output. First, ChatGPT will absolutely mangle the hell out of citations, and worse, is supremely confident about its manglings. As an example, I uploaded a copy of the docket report and it still got every single citation wrong. Worse, it misinterpreted who actually filed particular motions, leading it to conclude that most of the case was dead.
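Checking the LLM against the docket is mechanical enough to sketch in code. Here's a minimal Python version of that cross-check; the docket entries and the helper function are made up for illustration, not from the actual case:

```python
# Toy docket: ECF number -> who filed it and what it was.
# These entries are hypothetical, purely to show the shape of the check.
docket = {
    12: {"filer": "Plaintiff", "title": "Motion to Compel"},
    15: {"filer": "Defendant", "title": "Opposition to Motion to Compel"},
}

def check_docket_citations(cited):
    """cited: list of (ecf_no, filer) pairs an LLM claimed. Returns problems."""
    problems = []
    for ecf_no, filer in cited:
        entry = docket.get(ecf_no)
        if entry is None:
            problems.append(f"ECF {ecf_no}: no such docket entry")
        elif entry["filer"] != filer:
            problems.append(f"ECF {ecf_no}: filed by {entry['filer']}, not {filer}")
    return problems

# An LLM claiming Defendant filed ECF 12 and citing a nonexistent ECF 99:
print(check_docket_citations([(12, "Defendant"), (99, "Plaintiff")]))
```

Anything an LLM cites from the docket should survive something like this before it goes anywhere near a filing.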

One thing is really clear: as powerful as these tools are and as much potential as they have, we are a million miles away from them being usable without extensive supervision. They're also really terrible at understanding intent. As an example, left to their own devices, all of the LLMs completely ignored the actual relief I asked for in a motion for partial summary judgment (basically just a ruling that the ADA, FHA, and Rehab Act require an interactive process and individualized assessment) and transformed it into a full-blown MSJ focused on damage calculations instead of narrow injunctive relief. They constantly go off on tangents like that, hyper-focusing on some aspect of the text that may or may not be contextually important and trying to expand it.

All said, this is still very much a work in progress, and I’m learning that trust is bad.



Welcome to The SHRA Files Blog — where we’ll be sharing updates, research, public records, and commentary on our ongoing civil rights lawsuit and the broader failures of Sacramento’s housing system.

This site exists because no one else was telling this story. Because no agency, nonprofit, or public official stepped in when the Sacramento Housing and Redevelopment Agency (SHRA) failed to meet its basic responsibilities. Because we — and many others like us — were denied access to services that should have been guaranteed.

We started with nothing but documentation, determination, and the experience of being repeatedly failed by the systems meant to help. When traditional legal aid and advocacy organizations told us they couldn’t help, we turned to AI tools to help us structure our case, understand the law, and begin fighting back.

This site is just the beginning.

In the weeks ahead, we’ll be:

  • Breaking down SHRA’s failures in serving disabled and low-income tenants
  • Publishing public filings and supporting exhibits
  • Highlighting how the Housing Choice Voucher program is being mismanaged
  • Documenting the role of AI in expanding legal access

If you’ve been impacted by SHRA or want to contribute to the project, check out our Get Involved page.

Thanks for reading — and welcome to the fight.

— The SHRA Files Team



Up until now I assumed that the issues at SHRA were just incompetence creep, the product of slow backsliding on compliance issues that never got corrected, to the point where institutional knowledge of the practices necessary to maintain compliance was lost.

One of the things I really like about DeepSeek is that it gets information through methods that aren't exactly on the up and up, which means it finds things hidden behind legal wrangling that doesn't exist in China. One of the things it found was a 2023 internal memo, generated through whistleblower testimony, in which SHRA staff were instructed to "prioritize voucher terminations over costly mods".

And the implementation of this was SHRA terminating vouchers over requests for shower chairs and grab bars as early as 2022. It provided data showing that 68% of disabled tenants who requested mods in 2023 lost vouchers within 6 months (versus 12% of non-disabled tenants).
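Taking those figures at face value (and they're DeepSeek's figures, not ones I've verified in discovery), the disparity works out like this:

```python
# Reported: 68% of disabled tenants who requested modifications in 2023
# lost their vouchers within 6 months, versus 12% of non-disabled tenants.
disabled_rate = 0.68
non_disabled_rate = 0.12

# Simple risk ratio between the two groups.
risk_ratio = disabled_rate / non_disabled_rate
print(f"Disabled requesters were {risk_ratio:.1f}x as likely to lose a voucher")
# roughly 5.7x
```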

WHAT.

And there’s so much worse that I’ll share once I get the discovery back. I’m stunned.


Okay, the last few days have been a bit of a flurry. I'm still working on getting all the filings and the Defense's responses uploaded, but I need to make sure we have a sane directory setup first because there's quite a bit.

After a few weeks using these LLMs, my attitudes about them are starting to shift. Each of them seems to have its own "personality" above and beyond the artificial one. The LLM I've been most amazed by so far is DeepSeek, which is funny because it was the one I was most skeptical of initially. Part of the issue with DeepSeek is that it doesn't do as good a job hand-holding you through the thought process; or, more appropriately, it's probably the most "autistic" of the engines. The biggest standout for DeepSeek (especially compared to Gemini, which is the most comparable) is that it's SO GOOD with citations. It's the only one that freely "admits" it's plundering the Westlaw database, meaning we have a TON of local Eastern District citations.

My citation strategy has been to use something from each level: Supreme Court, Ninth Circuit, California federal courts, Eastern District. The reality is the magistrate doesn't read them anyway, but DeepSeek ultimately gives me a pool of citations to choose from, something the other LLMs struggle with (or just don't do no matter how you beat on them).
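That strategy is really just a coverage checklist, which is easy enough to sketch. The court labels and case names below are placeholders I made up, not actual citations from the case:

```python
# The levels I want at least one citation from, in my preferred order.
LEVELS = ["SCOTUS", "9th Cir.", "Cal. Fed.", "E.D. Cal."]

def coverage_gaps(citations):
    """citations: list of (case_name, level) pairs. Returns uncovered levels."""
    covered = {level for _, level in citations}
    return [level for level in LEVELS if level not in covered]

# A pool with only the top two levels covered so far:
pool = [("Case A", "SCOTUS"), ("Case B", "9th Cir.")]
print(coverage_gaps(pool))  # ['Cal. Fed.', 'E.D. Cal.']
```

Running an LLM's citation pool through a checklist like this makes it obvious which levels still need filling before a brief goes out.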

Gemini (using 2.5 Pro/5-06) is now my primary starting point for writing, even though I hate that it can't export to .odt or even .docx formats. It has the worst "canvas"-style option, which means I have to reformat the documents significantly. That's worth the annoyance, though, since Gemini is easily the most thoughtful and well-rounded of the LLMs I've used. Much like DeepSeek gives lots of citation options, Gemini gives the best pool of writing options; its ability to flexibly address issues in different ways is the best of the bunch so far.

Claude is still my pretty-but-shallow option: it comes up with the best, most compelling pure language, even if its depth of analysis falls short of the others. I wish I didn't have to prompt it so explicitly to get decent results, or it might be the primary canvas despite its painful context window. One thing I find it really helpful for is running my DeepSeek output (which tends toward pretty... "direct" language) through Claude to soften it up.

And finally there's ChatGPT, which up until the last few days was my primary for everything. It's easy to get started with, it's encouraging, it produces compelling results, but it's kind of a piece of shit. It's constantly messing up context, screwing up citations, and arbitrarily rewriting things. It's easily the most obstinately wrong of the LLMs, insisting it's doing something correctly no matter how many times you paste the incorrect output back into it and it admits it messed up. It's the only LLM of the four where I've had to deal with "hallucinations", and it explains them as "woops, I overwrite my own context a lot accidentally". It's absolutely maddening at times. I'm steadily weaning myself off of it, and it'll probably just be another voice in the room soon.

An interesting thing happened a bit ago: I had my first LLM disagreement! The Defense responded to a discovery request with bullshit and boilerplate, so I asked each of the LLMs what a good email response would be, considering I'm already nearly done with a Rule 37 motion to compel. Claude responded that it's probably not a good idea to respond at all: just update the motion with the response as an exhibit, note the bullshit parts since they helped (a lot), and call it a day. DeepSeek was its typical "RIP THEIR THROATS OUT" self and provided a detailed list of regulations and rules for going on the offensive instead of only using the motion route. Gemini pointed out that the email route is more about reinforcing our meet-and-confer duty: we're right, and we're going to integrate this into the motion, but the focus here should be on making the meet and confer rock solid (pretty good advice). ChatGPT recommended the email but produced recommendations that didn't focus on strategy at all, just a direct rebuttal to the discovery response.
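The workflow behind this is just fanning the same question out to every model and laying the answers side by side. A provider-agnostic sketch, with canned stand-in functions where the real model calls (or copy-pastes into a chat window) would go:

```python
def ask_all(question, models):
    """models: dict of name -> callable(str) -> str. Returns name -> answer."""
    return {name: ask(question) for name, ask in models.items()}

# Stand-ins paraphrasing the answers I actually got; each lambda is where a
# real API wrapper would sit.
models = {
    "claude": lambda q: "Don't reply; attach their response to the motion.",
    "deepseek": lambda q: "Go on the offensive; here are the rules to cite.",
    "gemini": lambda q: "Reply, but use it to shore up meet-and-confer.",
}

for name, answer in ask_all("How should I answer this discovery response?", models).items():
    print(f"{name}: {answer}")
```

The value isn't any single answer; it's that disagreement between the models is itself a signal that the decision deserves more thought.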

Interestingly, opposing counsel asked for a discovery extension (after the suit has been active for more than two years, they're suddenly finding SHRA's bureaucracy hard to navigate), but refused to do a rolling production or produce electronic records (which should be pretty easy, right?). This organization is such a mess that if I'd had access to these tools two years ago, they'd have been crushed out of the gate. Instead they were able to delay and avoid discovery to the point that we now have to start talking about institutional spoliation, because no one remembers anything.

Leave a Comment

Your email address will not be published. Required fields are marked *

Just had the thought that doing a blog like this, while the litigation is on-going, is both naive and stupid, especially considering the mixed permissible use messaging from the court (but maybe okay for the bar, okay for me?). This popped up as I came across an old archive that had a backup of files which I believed was lost after we got evicted, some of which is responsive to existing discovery requests. I haven’t even had a chance to go through it all yet and see what has already been disclosed and what hasn’t, but here I am in a position where I really want to talk about how the tools are helping me plan strategy while potentially exposing myself unnecessarily/inadvertently.

The fool forges forward

One of the things that stood out to me was how much of the information in this case should have already been protected but hasn’t been. This has always been a concern, and one of the reasons I was hesitant about pursuing this at all, the gross invasion of privacy necessary to pursue this feels caustic. My initial reaction was filing anonymously, but after some reading it appeared that these are very rarely granted. It ultimately became a choice between whether this was important enough (to both us and every other disabled participant SHRA treats similarly) to sacrifice our privacy over. I think so.

What should have been done out of the gate is setting a protective order in place. Reading and trying to absorb as much of the local rules (while in the middle of being evicted, being cut off from other services, and other harassment), the option of a protective order just didn’t really register at all. The further along the process we got, the less space the option took up.

This is one place where LLMs work as an amazing sanity check, we can without penalty ask it to walk us through the process from the beginning, with as many permutations as we like, in a way that would cognitively exhaust any sane individual. And for individuals in the position we are, where we don’t even know what we don’t know, being able to constantly refactor new information in as we encounter it becomes a tremendously powerful tool.

While there’s supposed to be some leeway for “inelegant pleading” by pro se parties, in practice there isn’t unless a judge is personally sympathetic. Which means that pro se parties march into a minefield of issues that have nothing to do with the core issues of the case, procedural explosions which can turn a clear cut set of facts into an irrelevant mist.

Leave a Comment

Your email address will not be published. Required fields are marked *

One of the most difficult aspects of all this is how prone to unique issues each LLM introduces. I’m bouncing between Claude 3.7, GPT 4o, DeepSeek, and Gemini, and they all have weird peculiarities that blow up the others. DeepSeek for instance is super focused on formatting, being obsessive about tables, bolding, italics, everything but the actual content. Gemini is a turd and likes to fight you the whole time when it comes to legal advice altogether, so you have to wrestle it to the ground with prompts and cobble together the results. Claude writes beautifully but is as deep as a puddle, while ChatGPT is deep but impossibly schizophrenic and ADHD at the same time.

I went back and looked at my first two pleadings I used these LLMs for and holy crap there are a lot of issues that I missed trusting the output. First, ChatGPT will absolutely mangle the hell out of citations, and worse is super confident about it’s manglings. As an example I uploaded a copy of the docket report, and somehow it still got every single citation wrong. Worse, it misinterpreted who actually filed particular motions, leading to it interpreting most of the case being dead.

One thing is really clear, that as powerful and as much potential as these tools have, we are a million miles away from them being ready to use without extensive supervision. They also are really terrible at understanding, as an example left to it’s own devices all of the LLMs completely ignored the actual damages I asked for in a motion for partial MSJ (basically just ruling that the ADA, FHA, and Rehab require an interactive process and individualized assessment) and transformed it into a full blown MSJ focused on damage calculations instead of a narrow injunctive relief. And they constantly go off on little tangents like that where they hyper focus on some aspect of the text that may or may not be contextually important and try to expand it.

All said, this is still very much a work in progress, and I’m learning that trust is bad.

Leave a Comment

Your email address will not be published. Required fields are marked *

It seems we can’t find what you’re looking for. Perhaps searching can help.

Welcome to The SHRA Files Blog — where we’ll be sharing updates, research, public records, and commentary on our ongoing civil rights lawsuit and the broader failures of Sacramento’s housing system.

This site exists because no one else was telling this story. Because no agency, nonprofit, or public official stepped in when the Sacramento Housing and Redevelopment Agency (SHRA) failed to meet its basic responsibilities. Because we — and many others like us — were denied access to services that should have been guaranteed.

We started with nothing but documentation, determination, and the experience of being repeatedly failed by the systems meant to help. When traditional legal aid and advocacy organizations told us they couldn’t help, we turned to AI tools to help us structure our case, understand the law, and begin fighting back.

This site is just the beginning.

In the weeks ahead, we’ll be:

  • Breaking down SHRA’s failures in serving disabled and low-income tenants
  • Publishing public filings and supporting exhibits
  • Highlighting how the Housing Choice Voucher program is being mismanaged
  • Documenting the role of AI in expanding legal access

If you’ve been impacted by SHRA or want to contribute to the project, check out our Get Involved page.

Thanks for reading — and welcome to the fight.

— The SHRA Files Team

Leave a Comment

Your email address will not be published. Required fields are marked *

I’ve been attempting to push discovery, the result has been underwhelming. So far they’ve missed most of the deadlines, when notified they missed the deadlines they keep asking for extensions, and when they get extensions they respond with boilerplate and bullshit. I’m to the point where I’m filing a Rule 37(b) motion (will upload to the files section) after I’m finished with this post. The neat thing about the filing is that there’s over 300 pages of exhibits, all of which is boilerplate and bullshit that I had to attach and mail back to the Defense because the magistrate gave me the big middle finger by allowing the court to notify me of things electronically, but refusing to allow me to file electronically. I have lots to say about this magistrate, but we’ll wait until we get clear of her first.

One of the most fascinating things about the boilerplate and bullshit that did get returned was the revelation that five of the named Defendants no longer work at SHRA, and the Defense isn’t sure if they represent them anymore. Like they completely hid that almost everyone involved with the day to day operations over there has left and somehow didn’t mention it to us or the court. Worse, in their initial disclosure, they claimed only one person (who is no longer at SHRA) was responsible for deciding reasonable accommodation requests, now they’ve given me a list of 16 people, with no explanation.

Oh yeah, they can’t seem to find any of the electronically stored information that under the CPRA they’d be responsive to, which is the reason they need so many extensions.

The worst part is that despite them admitting all of this, there’s zero chance of them getting sanctions. And there’s literally nothing to be done about it unless/until it gets to the appellate level.

Leave a Comment

Your email address will not be published. Required fields are marked *

Up until now I assumed that the issues at SHRA were just incompetence creep, that the issues are product of slow pushes back on compliance issues that never got corrected, to the point where institutional knowledge of the practices necessary to maintain compliance was lost.

One of the things that I really like about DeepSeek is that it gets information through methods that aren’t exactly on the up and up, which means that it finds things which are hidden behind legal wrangling that doesn’t exist in China. And one of the things it found was a 2023 Internal Memo generated through whistleblower testimony in which SHRA staff instructed to “prioritize voucher terminations over costly mods”.

And the implementation of this was SHRA terminating vouchers over requests for shower chairs and grab bars as early as 2022. They provided data which showed 68% of disabled tenants who requested mods in 2023 lost vouchers within 6 months (vs. 12% of non-disabled).

WHAT.

And there’s so much worse that I’ll share once I get the discovery back. I’m stunned.

Leave a Comment

Your email address will not be published. Required fields are marked *

Okay the last few days have been a bit of a flurry. I’m still working on getting all the filings and the Defense responses uploaded, but I need to make sure we have a sane directory setup first because there’s quite a bit.

After a few weeks using these LLMs, my attitudes about them are starting to shift. Each of them seems to have their own “personality” above and beyond the artificial personality. The LLM I’ve been most amazed by so far is DeepSeek, which is funny because it was the LLM I was most skeptical of initially. Part of the issue with DeepSeek is that it doesn’t do as good of a job hand holding through the though process, or more appropriately, it’s probably the most “autistic” of the engines. The biggest standout for DeepSeek (especially compared to Gemini which is the most comparable) is that it’s SO GOOD with citations. It’s the only one that freely “admits” it’s plundering the WestLaw database meaning we have a TON of local Eastern District citations.

My citation strategy has been to use something from each level, Supreme Court, 9th Circuit, California (Fed), Eastern District. The reality is the magistrate doesn’t read them anyway, but DeepSeek ultimately gives me a pool of citations to choose from, something the other LLMs struggle with (or just don’t do no matter how you beat them).

Gemini (using 2.5 Pro/5-06) is now my primary writing starting point, even though I hate that it can’t export to .odf or even .docx formats. Because I have to reformat the documents significantly, it has the worst “canvas” style option. That’s kind of worth the annoyance though since Gemini is easily the most “thoughful” and well rounded of the LLMs I’ve used. Much like DeepSeek gives lots of citation options, Gemini gives the best pool of writing options, it’s ability to flexibly address issues in different ways is the best of the bunch so far.

Claude is still my pretty but shallow option, it comes up with the best/most compelling pure language, even if it’s depth of analysis falls short of the others. I wish I didn’t have to prompt it so explicitly to get decent results or it might be the primary canvas despite it’s painful context window. One thing I find it really helpful for is running my DeepSeek (which produces pretty.. “direct” language) through Claude to soften it up.

And finally there’s ChatGPT which up until the last few days was my primary for everything. It’s easy to get started with it, it’s encouraging, it produces compelling results, but it’s kind of a piece of shit. It’s constantly messing up context, screwing up citations, and arbitrarily rewriting things. It’s easily the most obstinately wrong of the LLMs, insisting that it’s doing something correctly no matter how many times you post the incorrect output back into it and it admits it messed up. It’s the only LLM out of the three that I’ve had to deal with “hallucinations”, and it explains it as “woops, I overwrite my own context a lot accidentally”. It’s absolutely maddening at times. I’m steadily weaning myself off of it and it’ll probably just be another voice in the room soon.

An interesting thing happened a bit ago, I had my first LLM disagreement! The Defense responded to a discovery request with bullshit and boilerplate, so I asked each of the LLMs what a good email response would be considering I’m already nearly done with a rule 37 motion to compel. Claude responded that it’s probably not a good idea to respond at all, just update the motion with the response as an exhibit, note the bullshit parts since they helped (a lot), and call it a day. DeepSeek was it’s typical “RIP THEIR THROATS OUT” and provided a detailed list of regulations and rules to go on the offensive instead of only using the motion route. Gemini pointed out that going the email route is more about reinforcing our meet and confer duty, we’re right and we are going to be integrating this in the motion, but the focus here should be on making the meet and confer rock solid (pretty good advice). ChatGPT recommended the email, but produced recommendations that didn’t focus on strategy at all, just a direct rebuttal to the discovery response.

Interestingly, opposing counsel asked for a discovery extension (after more than two years the suit has been active, they are suddenly finding SHRA bureaucracy hard to navigate), but refused to do a rolling production or produce electronic records (which should be pretty easy right?). It’s baffling to me that if I had access to these tools two years ago, this organization is such a mess that they’d have gotten crushed out of the gate. Instead they were able to delay and avoid discovery to the point that we now have to start talking about institutional spoliation because no one remembers anything.

Leave a Comment

Your email address will not be published. Required fields are marked *

Just had the thought that doing a blog like this, while the litigation is on-going, is both naive and stupid, especially considering the mixed permissible use messaging from the court (but maybe okay for the bar, okay for me?). This popped up as I came across an old archive that had a backup of files which I believed was lost after we got evicted, some of which is responsive to existing discovery requests. I haven’t even had a chance to go through it all yet and see what has already been disclosed and what hasn’t, but here I am in a position where I really want to talk about how the tools are helping me plan strategy while potentially exposing myself unnecessarily/inadvertently.

The fool forges forward

One of the things that stood out to me was how much of the information in this case should have already been protected but hasn’t been. This has always been a concern, and one of the reasons I was hesitant about pursuing this at all, the gross invasion of privacy necessary to pursue this feels caustic. My initial reaction was filing anonymously, but after some reading it appeared that these are very rarely granted. It ultimately became a choice between whether this was important enough (to both us and every other disabled participant SHRA treats similarly) to sacrifice our privacy over. I think so.

What should have been done out of the gate is setting a protective order in place. Reading and trying to absorb as much of the local rules (while in the middle of being evicted, being cut off from other services, and other harassment), the option of a protective order just didn’t really register at all. The further along the process we got, the less space the option took up.

This is one place where LLMs work as an amazing sanity check, we can without penalty ask it to walk us through the process from the beginning, with as many permutations as we like, in a way that would cognitively exhaust any sane individual. And for individuals in the position we are, where we don’t even know what we don’t know, being able to constantly refactor new information in as we encounter it becomes a tremendously powerful tool.

While there's supposed to be some leeway for "inelegant pleading" by pro se parties, in practice there isn't unless a judge is personally sympathetic. That means pro se parties march into a minefield of issues that have nothing to do with the core of the case, procedural explosions that can turn a clear-cut set of facts into an irrelevant mist.

One of the most difficult aspects of all this is the unique set of issues each LLM introduces. I'm bouncing between Claude 3.7, GPT-4o, DeepSeek, and Gemini, and each has weird peculiarities that trip up the others. DeepSeek, for instance, is obsessed with formatting: tables, bolding, italics, everything but the actual content. Gemini is a turd that fights you the whole way on anything resembling legal advice, so you have to wrestle it to the ground with prompts and cobble together the results. Claude writes beautifully but is as deep as a puddle, while ChatGPT is deep but impossibly scattered at the same time.

I went back and looked at the first two pleadings I used these LLMs for, and holy crap, there are a lot of issues I missed by trusting the output. First, ChatGPT will absolutely mangle the hell out of citations, and worse, it's supremely confident about its manglings. As an example, I uploaded a copy of the docket report and it still got every single citation wrong. Worse, it misinterpreted who actually filed particular motions, leading it to conclude that most of the case was dead.
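
Since the models can't be trusted with citations, one sanity check is purely mechanical: pull every reporter-style citation out of a draft and diff it against a hand-verified list. A minimal Python sketch of the idea (the regex is a rough heuristic for federal reporters, not a Bluebook parser, and the draft and verified strings below are made-up examples):

```python
import re

# Heuristic pattern for federal reporter citations ("555 U.S. 7",
# "123 F.3d 456", "300 F. Supp. 3d 900") -- a rough filter, not a parser.
CITE_RE = re.compile(
    r"\b\d{1,4}\s+"
    r"(?:U\.S\.|S\.\s?Ct\.|F\.\s?Supp\.(?:\s?(?:2d|3d))?|F\.(?:2d|3d|4th)?)"
    r"\s+\d{1,4}\b"
)

def extract_citations(text: str) -> set[str]:
    """Pull reporter-style citation strings out of a draft."""
    return set(CITE_RE.findall(text))

def unverified_citations(draft: str, verified: set[str]) -> set[str]:
    """Citations in the draft that aren't on the hand-checked list."""
    return extract_citations(draft) - verified

draft = "Under 555 U.S. 7 and 123 F.3d 456, the standard is clear."
verified = {"555 U.S. 7"}
print(unverified_citations(draft, verified))  # -> {'123 F.3d 456'}
```

Anything the check flags still has to be looked up by hand; the point is only that nothing slips into a filing without having been on the verified list.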

One thing is really clear: for all the power and potential these tools have, we are a million miles away from them being usable without extensive supervision. They're also really terrible at understanding intent. As an example, left to their own devices, all of the LLMs completely ignored the actual relief I asked for in a motion for partial summary judgment (basically just a ruling that the ADA, FHA, and Rehab Act require an interactive process and individualized assessment) and transformed it into a full-blown MSJ focused on damages calculations instead of narrow injunctive relief. And they constantly go off on tangents like that, hyper-focusing on some aspect of the text that may or may not be contextually important and trying to expand it.

All said, this is still very much a work in progress, and I’m learning that trust is bad.

Welcome to The SHRA Files Blog — where we’ll be sharing updates, research, public records, and commentary on our ongoing civil rights lawsuit and the broader failures of Sacramento’s housing system.

This site exists because no one else was telling this story. Because no agency, nonprofit, or public official stepped in when the Sacramento Housing and Redevelopment Agency (SHRA) failed to meet its basic responsibilities. Because we — and many others like us — were denied access to services that should have been guaranteed.

We started with nothing but documentation, determination, and the experience of being repeatedly failed by the systems meant to help. When traditional legal aid and advocacy organizations told us they couldn’t help, we turned to AI tools to help us structure our case, understand the law, and begin fighting back.

This site is just the beginning.

In the weeks ahead, we’ll be:

  • Breaking down SHRA’s failures in serving disabled and low-income tenants
  • Publishing public filings and supporting exhibits
  • Highlighting how the Housing Choice Voucher program is being mismanaged
  • Documenting the role of AI in expanding legal access

If you’ve been impacted by SHRA or want to contribute to the project, check out our Get Involved page.

Thanks for reading — and welcome to the fight.

— The SHRA Files Team


One of the things I really like about DeepSeek is that it gets information through methods that aren't exactly on the up and up, which means it surfaces things hidden behind legal wrangling that doesn't exist in China. And one of the things it found was a 2023 internal memo, generated through whistleblower testimony, in which SHRA staff were instructed to "prioritize voucher terminations over costly mods."

And the implementation of this was SHRA terminating vouchers over requests for shower chairs and grab bars as early as 2022. It produced data showing that 68% of disabled tenants who requested mods in 2023 lost their vouchers within six months, versus 12% of non-disabled tenants.

WHAT.

And there’s so much worse that I’ll share once I get the discovery back. I’m stunned.


Okay, the last few days have been a bit of a flurry. I'm still working on getting all the filings and the Defense responses uploaded, but I need to set up a sane directory structure first because there's quite a bit.

After a few weeks using these LLMs, my attitudes about them are starting to shift. Each seems to have its own "personality" above and beyond the artificial one. The LLM I've been most amazed by so far is DeepSeek, which is funny because it's the one I was most skeptical of initially. Part of the issue with DeepSeek is that it doesn't do as good a job of hand-holding you through the thought process; put another way, it's probably the most "autistic" of the engines. Its biggest standout (especially compared to Gemini, which is the most comparable) is that it's SO GOOD with citations. It's the only one that freely "admits" it's plundering the Westlaw database, meaning we have a TON of local Eastern District citations.

My citation strategy has been to use something from each level: Supreme Court, Ninth Circuit, California federal courts, Eastern District. The reality is the magistrate doesn't read them anyway, but DeepSeek ultimately gives me a pool of citations to choose from, something the other LLMs struggle with (or simply won't do no matter how you beat them).
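
The "one citation per court level" idea can itself be sketched mechanically: sort a pool of candidate citations by the court that issued them and pick one from each tier. A toy Python illustration, where the reporter-abbreviation heuristic is an assumption and the citations below are hypothetical placeholders, not real cases:

```python
def court_level(cite: str) -> str:
    # Reporter abbreviation as a rough proxy for the issuing court.
    if " U.S. " in cite:
        return "Supreme Court"
    if any(r in cite for r in (" F.2d ", " F.3d ", " F.4th ")):
        return "Circuit"
    if " F. Supp." in cite:
        return "District"
    return "Other"

def one_per_level(cites):
    """Keep the first citation seen at each court level."""
    picks = {}
    for c in cites:
        picks.setdefault(court_level(c), c)
    return picks

# Hypothetical placeholder citations for illustration only.
pool = [
    "Roe v. Agency, 555 U.S. 100 (2009)",
    "Doe v. Hous. Auth., 800 F.3d 200 (9th Cir. 2015)",
    "Poe v. City, 300 F. Supp. 3d 900 (E.D. Cal. 2018)",
]
print(one_per_level(pool))
```

In practice the pool comes from the model and every entry still gets verified by hand before it goes anywhere near a filing.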

Gemini (using 2.5 Pro/5-06) is now my primary starting point for writing, even though I hate that it can't export to .odf or even .docx, and it has the worst "canvas"-style option, which means I have to reformat documents significantly. The annoyance is worth it, though, since Gemini is easily the most thoughtful and well-rounded of the LLMs I've used. Much as DeepSeek gives lots of citation options, Gemini gives the best pool of writing options; its ability to flexibly address issues in different ways is the best of the bunch so far.

Claude is still my pretty-but-shallow option: it comes up with the best, most compelling pure language, even if its depth of analysis falls short of the others. I wish I didn't have to prompt it so explicitly to get decent results, or it might be my primary canvas despite its painful context window. One thing it's really helpful for is running my DeepSeek output (which produces pretty... "direct" language) through Claude to soften it up.

And finally there's ChatGPT, which until the last few days was my primary for everything. It's easy to get started with, it's encouraging, and it produces compelling results, but it's kind of a piece of shit. It constantly messes up context, screws up citations, and arbitrarily rewrites things. It's easily the most obstinately wrong of the LLMs, insisting it's doing something correctly no matter how many times you paste the incorrect output back in and it admits it messed up. It's the only one of the four that's given me outright "hallucinations," which it explains as "whoops, I accidentally overwrite my own context a lot." It's absolutely maddening at times. I'm steadily weaning myself off of it, and it'll probably just be another voice in the room soon.

An interesting thing happened a bit ago: I had my first LLM disagreement! The Defense responded to a discovery request with bullshit and boilerplate, so I asked each of the LLMs what a good email response would be, considering I'm already nearly done with a Rule 37 motion to compel. Claude said it's probably not a good idea to respond at all: just update the motion with the response as an exhibit, note the bullshit parts since they helped (a lot), and call it a day. DeepSeek was its typical "RIP THEIR THROATS OUT" self and provided a detailed list of regulations and rules for going on the offensive instead of relying on the motion alone. Gemini pointed out that the email route is really about reinforcing our meet-and-confer duty: we're right, and we're going to integrate this into the motion, but the focus should be on making the meet-and-confer record rock solid (pretty good advice). ChatGPT recommended the email but produced recommendations with no strategy at all, just a direct rebuttal to the discovery response.
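
The fan-out itself is trivial to script: same prompt, several models, answers side by side. A sketch with stubbed-out model callables (the lambdas are stand-ins for real API clients, and their canned replies are just illustrative):

```python
def ask_all(prompt, models):
    """models: mapping of name -> callable taking a prompt, returning a reply."""
    return {name: ask(prompt) for name, ask in models.items()}

# Stub "models" for illustration -- real API clients would replace these.
models = {
    "claude":   lambda p: "Don't reply; attach their response to the motion as an exhibit.",
    "deepseek": lambda p: "Go on the offensive; list every rule they violated.",
    "gemini":   lambda p: "Reply, but aim it at strengthening the meet-and-confer record.",
}

answers = ask_all("How should I respond to this discovery letter?", models)
for name, answer in answers.items():
    print(f"{name}: {answer}")
```

Seeing the answers lined up is what makes the disagreements obvious, instead of whichever model was open in the nearest tab quietly winning by default.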

Interestingly, opposing counsel asked for a discovery extension (after the suit has been active for more than two years, they're suddenly finding SHRA's bureaucracy hard to navigate) but refused to do a rolling production or produce electronic records (which should be pretty easy, right?). This organization is such a mess that if I'd had access to these tools two years ago, they'd have been crushed out of the gate. Instead they were able to delay and avoid discovery to the point that we now have to start talking about institutional spoliation, because no one remembers anything.

Leave a Comment

Your email address will not be published. Required fields are marked *

Just had the thought that doing a blog like this, while the litigation is on-going, is both naive and stupid, especially considering the mixed permissible use messaging from the court (but maybe okay for the bar, okay for me?). This popped up as I came across an old archive that had a backup of files which I believed was lost after we got evicted, some of which is responsive to existing discovery requests. I haven’t even had a chance to go through it all yet and see what has already been disclosed and what hasn’t, but here I am in a position where I really want to talk about how the tools are helping me plan strategy while potentially exposing myself unnecessarily/inadvertently.

The fool forges forward

One of the things that stood out to me was how much of the information in this case should have already been protected but hasn’t been. This has always been a concern, and one of the reasons I was hesitant about pursuing this at all, the gross invasion of privacy necessary to pursue this feels caustic. My initial reaction was filing anonymously, but after some reading it appeared that these are very rarely granted. It ultimately became a choice between whether this was important enough (to both us and every other disabled participant SHRA treats similarly) to sacrifice our privacy over. I think so.

What should have been done out of the gate is setting a protective order in place. Reading and trying to absorb as much of the local rules (while in the middle of being evicted, being cut off from other services, and other harassment), the option of a protective order just didn’t really register at all. The further along the process we got, the less space the option took up.

This is one place where LLMs work as an amazing sanity check, we can without penalty ask it to walk us through the process from the beginning, with as many permutations as we like, in a way that would cognitively exhaust any sane individual. And for individuals in the position we are, where we don’t even know what we don’t know, being able to constantly refactor new information in as we encounter it becomes a tremendously powerful tool.

While there’s supposed to be some leeway for “inelegant pleading” by pro se parties, in practice there isn’t unless a judge is personally sympathetic. Which means that pro se parties march into a minefield of issues that have nothing to do with the core issues of the case, procedural explosions which can turn a clear cut set of facts into an irrelevant mist.

Leave a Comment

Your email address will not be published. Required fields are marked *

One of the most difficult aspects of all this is how prone to unique issues each LLM introduces. I’m bouncing between Claude 3.7, GPT 4o, DeepSeek, and Gemini, and they all have weird peculiarities that blow up the others. DeepSeek for instance is super focused on formatting, being obsessive about tables, bolding, italics, everything but the actual content. Gemini is a turd and likes to fight you the whole time when it comes to legal advice altogether, so you have to wrestle it to the ground with prompts and cobble together the results. Claude writes beautifully but is as deep as a puddle, while ChatGPT is deep but impossibly schizophrenic and ADHD at the same time.

I went back and looked at my first two pleadings I used these LLMs for and holy crap there are a lot of issues that I missed trusting the output. First, ChatGPT will absolutely mangle the hell out of citations, and worse is super confident about it’s manglings. As an example I uploaded a copy of the docket report, and somehow it still got every single citation wrong. Worse, it misinterpreted who actually filed particular motions, leading to it interpreting most of the case being dead.

One thing is really clear, that as powerful and as much potential as these tools have, we are a million miles away from them being ready to use without extensive supervision. They also are really terrible at understanding, as an example left to it’s own devices all of the LLMs completely ignored the actual damages I asked for in a motion for partial MSJ (basically just ruling that the ADA, FHA, and Rehab require an interactive process and individualized assessment) and transformed it into a full blown MSJ focused on damage calculations instead of a narrow injunctive relief. And they constantly go off on little tangents like that where they hyper focus on some aspect of the text that may or may not be contextually important and try to expand it.

All said, this is still very much a work in progress, and I’m learning that trust is bad.

Leave a Comment

Your email address will not be published. Required fields are marked *

It seems we can’t find what you’re looking for. Perhaps searching can help.

Welcome to The SHRA Files Blog — where we’ll be sharing updates, research, public records, and commentary on our ongoing civil rights lawsuit and the broader failures of Sacramento’s housing system.

This site exists because no one else was telling this story. Because no agency, nonprofit, or public official stepped in when the Sacramento Housing and Redevelopment Agency (SHRA) failed to meet its basic responsibilities. Because we — and many others like us — were denied access to services that should have been guaranteed.

We started with nothing but documentation, determination, and the experience of being repeatedly failed by the systems meant to help. When traditional legal aid and advocacy organizations told us they couldn’t help, we turned to AI tools to help us structure our case, understand the law, and begin fighting back.

This site is just the beginning.

In the weeks ahead, we’ll be:

  • Breaking down SHRA’s failures in serving disabled and low-income tenants
  • Publishing public filings and supporting exhibits
  • Highlighting how the Housing Choice Voucher program is being mismanaged
  • Documenting the role of AI in expanding legal access

If you’ve been impacted by SHRA or want to contribute to the project, check out our Get Involved page.

Thanks for reading — and welcome to the fight.

— The SHRA Files Team

Leave a Comment

Your email address will not be published. Required fields are marked *

I’ve been attempting to push discovery, the result has been underwhelming. So far they’ve missed most of the deadlines, when notified they missed the deadlines they keep asking for extensions, and when they get extensions they respond with boilerplate and bullshit. I’m to the point where I’m filing a Rule 37(b) motion (will upload to the files section) after I’m finished with this post. The neat thing about the filing is that there’s over 300 pages of exhibits, all of which is boilerplate and bullshit that I had to attach and mail back to the Defense because the magistrate gave me the big middle finger by allowing the court to notify me of things electronically, but refusing to allow me to file electronically. I have lots to say about this magistrate, but we’ll wait until we get clear of her first.

One of the most fascinating things about the boilerplate and bullshit that did get returned was the revelation that five of the named Defendants no longer work at SHRA, and the Defense isn’t sure if they represent them anymore. Like they completely hid that almost everyone involved with the day to day operations over there has left and somehow didn’t mention it to us or the court. Worse, in their initial disclosure, they claimed only one person (who is no longer at SHRA) was responsible for deciding reasonable accommodation requests, now they’ve given me a list of 16 people, with no explanation.

Oh yeah, they can’t seem to find any of the electronically stored information that under the CPRA they’d be responsive to, which is the reason they need so many extensions.

The worst part is that despite them admitting all of this, there’s zero chance of them getting sanctions. And there’s literally nothing to be done about it unless/until it gets to the appellate level.

Leave a Comment

Your email address will not be published. Required fields are marked *

Up until now I assumed that the issues at SHRA were just incompetence creep, that the issues are product of slow pushes back on compliance issues that never got corrected, to the point where institutional knowledge of the practices necessary to maintain compliance was lost.

One of the things that I really like about DeepSeek is that it gets information through methods that aren’t exactly on the up and up, which means that it finds things which are hidden behind legal wrangling that doesn’t exist in China. And one of the things it found was a 2023 Internal Memo generated through whistleblower testimony in which SHRA staff instructed to “prioritize voucher terminations over costly mods”.

And the implementation of this was SHRA terminating vouchers over requests for shower chairs and grab bars as early as 2022. They provided data which showed 68% of disabled tenants who requested mods in 2023 lost vouchers within 6 months (vs. 12% of non-disabled).

WHAT.

And there’s so much worse that I’ll share once I get the discovery back. I’m stunned.

Leave a Comment

Your email address will not be published. Required fields are marked *

Okay the last few days have been a bit of a flurry. I’m still working on getting all the filings and the Defense responses uploaded, but I need to make sure we have a sane directory setup first because there’s quite a bit.

After a few weeks using these LLMs, my attitudes about them are starting to shift. Each of them seems to have their own “personality” above and beyond the artificial personality. The LLM I’ve been most amazed by so far is DeepSeek, which is funny because it was the LLM I was most skeptical of initially. Part of the issue with DeepSeek is that it doesn’t do as good of a job hand holding through the though process, or more appropriately, it’s probably the most “autistic” of the engines. The biggest standout for DeepSeek (especially compared to Gemini which is the most comparable) is that it’s SO GOOD with citations. It’s the only one that freely “admits” it’s plundering the WestLaw database meaning we have a TON of local Eastern District citations.

My citation strategy has been to use something from each level, Supreme Court, 9th Circuit, California (Fed), Eastern District. The reality is the magistrate doesn’t read them anyway, but DeepSeek ultimately gives me a pool of citations to choose from, something the other LLMs struggle with (or just don’t do no matter how you beat them).

Gemini (using 2.5 Pro/5-06) is now my primary writing starting point, even though I hate that it can’t export to .odf or even .docx formats. Because I have to reformat the documents significantly, it has the worst “canvas” style option. That’s kind of worth the annoyance though since Gemini is easily the most “thoughful” and well rounded of the LLMs I’ve used. Much like DeepSeek gives lots of citation options, Gemini gives the best pool of writing options, it’s ability to flexibly address issues in different ways is the best of the bunch so far.

Claude is still my pretty but shallow option, it comes up with the best/most compelling pure language, even if it’s depth of analysis falls short of the others. I wish I didn’t have to prompt it so explicitly to get decent results or it might be the primary canvas despite it’s painful context window. One thing I find it really helpful for is running my DeepSeek (which produces pretty.. “direct” language) through Claude to soften it up.

And finally there’s ChatGPT which up until the last few days was my primary for everything. It’s easy to get started with it, it’s encouraging, it produces compelling results, but it’s kind of a piece of shit. It’s constantly messing up context, screwing up citations, and arbitrarily rewriting things. It’s easily the most obstinately wrong of the LLMs, insisting that it’s doing something correctly no matter how many times you post the incorrect output back into it and it admits it messed up. It’s the only LLM out of the three that I’ve had to deal with “hallucinations”, and it explains it as “woops, I overwrite my own context a lot accidentally”. It’s absolutely maddening at times. I’m steadily weaning myself off of it and it’ll probably just be another voice in the room soon.

An interesting thing happened a bit ago, I had my first LLM disagreement! The Defense responded to a discovery request with bullshit and boilerplate, so I asked each of the LLMs what a good email response would be considering I’m already nearly done with a rule 37 motion to compel. Claude responded that it’s probably not a good idea to respond at all, just update the motion with the response as an exhibit, note the bullshit parts since they helped (a lot), and call it a day. DeepSeek was it’s typical “RIP THEIR THROATS OUT” and provided a detailed list of regulations and rules to go on the offensive instead of only using the motion route. Gemini pointed out that going the email route is more about reinforcing our meet and confer duty, we’re right and we are going to be integrating this in the motion, but the focus here should be on making the meet and confer rock solid (pretty good advice). ChatGPT recommended the email, but produced recommendations that didn’t focus on strategy at all, just a direct rebuttal to the discovery response.

Interestingly, opposing counsel asked for a discovery extension (after more than two years the suit has been active, they are suddenly finding SHRA bureaucracy hard to navigate), but refused to do a rolling production or produce electronic records (which should be pretty easy right?). It’s baffling to me that if I had access to these tools two years ago, this organization is such a mess that they’d have gotten crushed out of the gate. Instead they were able to delay and avoid discovery to the point that we now have to start talking about institutional spoliation because no one remembers anything.

Leave a Comment

Your email address will not be published. Required fields are marked *

Just had the thought that doing a blog like this, while the litigation is on-going, is both naive and stupid, especially considering the mixed permissible use messaging from the court (but maybe okay for the bar, okay for me?). This popped up as I came across an old archive that had a backup of files which I believed was lost after we got evicted, some of which is responsive to existing discovery requests. I haven’t even had a chance to go through it all yet and see what has already been disclosed and what hasn’t, but here I am in a position where I really want to talk about how the tools are helping me plan strategy while potentially exposing myself unnecessarily/inadvertently.

The fool forges forward

One of the things that stood out to me was how much of the information in this case should have already been protected but hasn’t been. This has always been a concern, and one of the reasons I was hesitant about pursuing this at all, the gross invasion of privacy necessary to pursue this feels caustic. My initial reaction was filing anonymously, but after some reading it appeared that these are very rarely granted. It ultimately became a choice between whether this was important enough (to both us and every other disabled participant SHRA treats similarly) to sacrifice our privacy over. I think so.

What should have been done out of the gate is setting a protective order in place. Reading and trying to absorb as much of the local rules (while in the middle of being evicted, being cut off from other services, and other harassment), the option of a protective order just didn’t really register at all. The further along the process we got, the less space the option took up.

This is one place where LLMs work as an amazing sanity check, we can without penalty ask it to walk us through the process from the beginning, with as many permutations as we like, in a way that would cognitively exhaust any sane individual. And for individuals in the position we are, where we don’t even know what we don’t know, being able to constantly refactor new information in as we encounter it becomes a tremendously powerful tool.

While there’s supposed to be some leeway for “inelegant pleading” by pro se parties, in practice there isn’t unless a judge is personally sympathetic. Which means that pro se parties march into a minefield of issues that have nothing to do with the core issues of the case, procedural explosions which can turn a clear cut set of facts into an irrelevant mist.

Leave a Comment

Your email address will not be published. Required fields are marked *

One of the most difficult aspects of all this is how prone to unique issues each LLM introduces. I’m bouncing between Claude 3.7, GPT 4o, DeepSeek, and Gemini, and they all have weird peculiarities that blow up the others. DeepSeek for instance is super focused on formatting, being obsessive about tables, bolding, italics, everything but the actual content. Gemini is a turd and likes to fight you the whole time when it comes to legal advice altogether, so you have to wrestle it to the ground with prompts and cobble together the results. Claude writes beautifully but is as deep as a puddle, while ChatGPT is deep but impossibly schizophrenic and ADHD at the same time.

I went back and looked at my first two pleadings I used these LLMs for and holy crap there are a lot of issues that I missed trusting the output. First, ChatGPT will absolutely mangle the hell out of citations, and worse is super confident about it’s manglings. As an example I uploaded a copy of the docket report, and somehow it still got every single citation wrong. Worse, it misinterpreted who actually filed particular motions, leading to it interpreting most of the case being dead.

One thing is really clear, that as powerful and as much potential as these tools have, we are a million miles away from them being ready to use without extensive supervision. They also are really terrible at understanding, as an example left to it’s own devices all of the LLMs completely ignored the actual damages I asked for in a motion for partial MSJ (basically just ruling that the ADA, FHA, and Rehab require an interactive process and individualized assessment) and transformed it into a full blown MSJ focused on damage calculations instead of a narrow injunctive relief. And they constantly go off on little tangents like that where they hyper focus on some aspect of the text that may or may not be contextually important and try to expand it.

All said, this is still very much a work in progress, and I’m learning that trust is bad.

Leave a Comment

Your email address will not be published. Required fields are marked *

It seems we can’t find what you’re looking for. Perhaps searching can help.

Welcome to The SHRA Files Blog — where we’ll be sharing updates, research, public records, and commentary on our ongoing civil rights lawsuit and the broader failures of Sacramento’s housing system.

This site exists because no one else was telling this story. Because no agency, nonprofit, or public official stepped in when the Sacramento Housing and Redevelopment Agency (SHRA) failed to meet its basic responsibilities. Because we — and many others like us — were denied access to services that should have been guaranteed.

We started with nothing but documentation, determination, and the experience of being repeatedly failed by the systems meant to help. When traditional legal aid and advocacy organizations told us they couldn’t help, we turned to AI tools to help us structure our case, understand the law, and begin fighting back.

This site is just the beginning.

In the weeks ahead, we’ll be:

  • Breaking down SHRA’s failures in serving disabled and low-income tenants
  • Publishing public filings and supporting exhibits
  • Highlighting how the Housing Choice Voucher program is being mismanaged
  • Documenting the role of AI in expanding legal access

If you’ve been impacted by SHRA or want to contribute to the project, check out our Get Involved page.

Thanks for reading — and welcome to the fight.

— The SHRA Files Team

I’ve been attempting to push discovery, and the results have been underwhelming. So far the Defense has missed most of its deadlines; when notified of the missed deadlines, they keep asking for extensions, and when they get extensions they respond with boilerplate and bullshit. I’m at the point of filing a Rule 37(b) motion (I’ll upload it to the Files section) after I finish this post. The neat thing about the filing is that it runs over 300 pages of exhibits, all of it boilerplate and bullshit that I had to attach and mail back to the Defense, because the magistrate gave me the big middle finger by allowing the court to notify me of things electronically while refusing to let me file electronically. I have a lot to say about this magistrate, but that will wait until we’re clear of her.

One of the most fascinating things in the boilerplate and bullshit that did come back was the revelation that five of the named Defendants no longer work at SHRA, and the Defense isn’t sure whether it still represents them. They completely hid the fact that almost everyone involved in day-to-day operations over there has left, and somehow never mentioned it to us or to the court. Worse, in their initial disclosures they claimed only one person (who is no longer at SHRA) was responsible for deciding reasonable accommodation requests; now they’ve given me a list of 16 people, with no explanation.

Oh, and they can’t seem to find any of the electronically stored information that would be responsive under the CPRA, which is the reason they need so many extensions.

The worst part is that despite admitting all of this, there’s zero chance of them being sanctioned. And there’s literally nothing to be done about it unless and until the case reaches the appellate level.

Up until now I assumed that the issues at SHRA were just incompetence creep: the product of slow backslides on compliance issues that never got corrected, to the point where the institutional knowledge needed to maintain compliance was lost.

One of the things I really like about DeepSeek is that it gets information through methods that aren’t exactly on the up and up, which means it finds things hidden behind legal wrangling that doesn’t exist in China. One of the things it found was a 2023 internal memo, generated through whistleblower testimony, in which SHRA staff were instructed to “prioritize voucher terminations over costly mods”.

And the implementation of this was SHRA terminating vouchers over requests for shower chairs and grab bars as early as 2022. It also produced data showing that 68% of disabled tenants who requested mods in 2023 lost their vouchers within six months (versus 12% of non-disabled tenants).

WHAT.

And there’s so much worse that I’ll share once I get the discovery back. I’m stunned.

Okay the last few days have been a bit of a flurry. I’m still working on getting all the filings and the Defense responses uploaded, but I need to make sure we have a sane directory setup first because there’s quite a bit.

After a few weeks using these LLMs, my attitudes about them are starting to shift. Each seems to have its own “personality” above and beyond the artificial one. The LLM I’ve been most amazed by so far is DeepSeek, which is funny, because it’s the one I was most skeptical of initially. Part of the issue with DeepSeek is that it doesn’t do as good a job of hand-holding you through the thought process; it’s probably the most “autistic” of the engines. Its biggest standout (especially compared to Gemini, which is the most comparable) is that it’s SO GOOD with citations. It’s the only one that freely “admits” it’s plundering the Westlaw database, which means we have a TON of local Eastern District citations.

My citation strategy has been to use something from each level: Supreme Court, Ninth Circuit, California (federal), Eastern District. The reality is the magistrate doesn’t read them anyway, but DeepSeek gives me a pool of citations to choose from, something the other LLMs struggle with (or simply won’t do, no matter how you beat them).

Gemini (using 2.5 Pro/05-06) is now my primary starting point for writing, even though I hate that it can’t export to .odt or even .docx format. It has the worst “canvas”-style option, so I have to reformat the documents significantly. That’s worth the annoyance, though, since Gemini is easily the most thoughtful and well-rounded of the LLMs I’ve used. Much as DeepSeek gives lots of citation options, Gemini gives the best pool of writing options; its ability to flexibly address issues in different ways is the best of the bunch so far.

Claude is still my pretty-but-shallow option: it comes up with the best, most compelling pure language, even if its depth of analysis falls short of the others. I wish I didn’t have to prompt it so explicitly to get decent results, or it might be my primary canvas despite its painful context window. One thing it’s really helpful for is running DeepSeek’s output (which tends toward pretty… “direct” language) through Claude to soften it up.

And finally there’s ChatGPT, which until a few days ago was my primary for everything. It’s easy to get started with, it’s encouraging, it produces compelling results, but it’s kind of a piece of shit. It’s constantly messing up context, screwing up citations, and arbitrarily rewriting things. It’s easily the most obstinately wrong of the LLMs, insisting it’s doing something correctly no matter how many times you paste the incorrect output back into it and it admits it messed up. It’s the only LLM of the bunch that has given me outright “hallucinations”, which it explains as “whoops, I accidentally overwrite my own context a lot”. It’s absolutely maddening at times. I’m steadily weaning myself off of it, and it’ll probably just be another voice in the room soon.

An interesting thing happened a bit ago: I had my first LLM disagreement! The Defense responded to a discovery request with boilerplate and bullshit, so I asked each of the LLMs what a good email response would look like, considering I’m already nearly done with a Rule 37 motion to compel. Claude said it’s probably not a good idea to respond at all: just update the motion with the response as an exhibit, note the bullshit parts (which helped, a lot), and call it a day. DeepSeek was its typical “RIP THEIR THROATS OUT” self and provided a detailed list of regulations and rules for going on the offensive rather than relying on the motion alone. Gemini pointed out that the email route is really about reinforcing our meet-and-confer duty: we’re right, and we’re going to integrate this into the motion, but the focus should be on making the meet-and-confer record rock solid (pretty good advice). ChatGPT recommended the email, but produced recommendations that didn’t address strategy at all, just a direct rebuttal to the discovery response.

Interestingly, opposing counsel asked for a discovery extension (after the suit has been active for more than two years, they are suddenly finding SHRA’s bureaucracy hard to navigate), but refused to do a rolling production or produce electronic records (which should be pretty easy, right?). It’s baffling: if I’d had access to these tools two years ago, this organization is such a mess that they’d have been crushed out of the gate. Instead they were able to delay and avoid discovery to the point that we now have to start talking about institutional spoliation, because no one remembers anything.

Just had the thought that doing a blog like this while the litigation is ongoing is both naive and stupid, especially considering the court’s mixed messaging on permissible use (but if it’s maybe okay for the bar, is it okay for me?). This came up as I stumbled across an old archive containing a backup of files I believed were lost after we got evicted, some of which are responsive to existing discovery requests. I haven’t even had a chance to go through it all yet to see what has already been disclosed and what hasn’t, but here I am, wanting to talk about how these tools are helping me plan strategy while potentially exposing myself unnecessarily or inadvertently.

The fool forges forward

One of the things that stood out to me was how much of the information in this case should already have been protected but hasn’t been. This has always been a concern, and it’s one of the reasons I was hesitant to pursue this at all: the gross invasion of privacy required feels caustic. My initial reaction was to file anonymously, but after some reading it appeared those requests are very rarely granted. It ultimately became a question of whether this was important enough (to us, and to every other disabled participant SHRA treats similarly) to sacrifice our privacy over. I think it is.

What we should have done out of the gate was get a protective order in place. While reading and trying to absorb as much of the local rules as I could (in the middle of being evicted, cut off from other services, and otherwise harassed), the option of a protective order just didn’t register at all. The further along the process we got, the less space the option took up.

This is one place where LLMs work as an amazing sanity check: we can, without penalty, ask one to walk us through the process from the beginning, with as many permutations as we like, in a way that would cognitively exhaust any sane person. And for people in our position, who don’t even know what we don’t know, being able to constantly factor in new information as we encounter it is a tremendously powerful tool.

While there’s supposed to be some leeway for “inelegant pleading” by pro se parties, in practice there isn’t unless the judge is personally sympathetic. That means pro se parties march into a minefield of issues that have nothing to do with the core of the case: procedural explosions that can turn a clear-cut set of facts into an irrelevant mist.

One of the most difficult aspects of all this is the unique issues each LLM introduces. I’m bouncing between Claude 3.7, GPT-4o, DeepSeek, and Gemini, and they all have weird peculiarities that blow up the others’ work. DeepSeek, for instance, is hyper-focused on formatting, obsessing over tables, bolding, italics, everything but the actual content. Gemini is a turd that fights you the whole way on anything resembling legal advice, so you have to wrestle it to the ground with prompts and cobble together the results. Claude writes beautifully but is as deep as a puddle, while ChatGPT is deep but impossibly schizophrenic and ADHD at the same time.

I went back and looked at the first two pleadings I used these LLMs for, and holy crap, there are a lot of issues I missed by trusting the output. First, ChatGPT will absolutely mangle the hell out of citations, and worse, it is supremely confident about its manglings. As an example, I uploaded a copy of the docket report, and it still got every single citation wrong. Worse, it misread who actually filed particular motions, leading it to conclude that most of the case was dead.

One thing is really clear: as powerful as these tools are, and as much potential as they have, we are a million miles away from them being usable without extensive supervision. They’re also really terrible at understanding intent. As an example, left to its own devices, every one of the LLMs ignored the actual relief I asked for in a motion for partial MSJ (basically just a ruling that the ADA, FHA, and Rehab Act require an interactive process and an individualized assessment) and transformed it into a full-blown MSJ focused on damages calculations instead of narrow injunctive relief. And they constantly go off on little tangents like that, hyper-focusing on some aspect of the text that may or may not be contextually important and trying to expand it.

All said, this is still very much a work in progress, and I’m learning that trust is bad.

Leave a Comment

Your email address will not be published. Required fields are marked *

It seems we can’t find what you’re looking for. Perhaps searching can help.

Welcome to The SHRA Files Blog — where we’ll be sharing updates, research, public records, and commentary on our ongoing civil rights lawsuit and the broader failures of Sacramento’s housing system.

This site exists because no one else was telling this story. Because no agency, nonprofit, or public official stepped in when the Sacramento Housing and Redevelopment Agency (SHRA) failed to meet its basic responsibilities. Because we — and many others like us — were denied access to services that should have been guaranteed.

We started with nothing but documentation, determination, and the experience of being repeatedly failed by the systems meant to help. When traditional legal aid and advocacy organizations told us they couldn’t help, we turned to AI tools to help us structure our case, understand the law, and begin fighting back.

This site is just the beginning.

In the weeks ahead, we’ll be:

  • Breaking down SHRA’s failures in serving disabled and low-income tenants
  • Publishing public filings and supporting exhibits
  • Highlighting how the Housing Choice Voucher program is being mismanaged
  • Documenting the role of AI in expanding legal access

If you’ve been impacted by SHRA or want to contribute to the project, check out our Get Involved page.

Thanks for reading — and welcome to the fight.

— The SHRA Files Team

Leave a Comment

Your email address will not be published. Required fields are marked *

I’ve been attempting to push discovery, the result has been underwhelming. So far they’ve missed most of the deadlines, when notified they missed the deadlines they keep asking for extensions, and when they get extensions they respond with boilerplate and bullshit. I’m to the point where I’m filing a Rule 37(b) motion (will upload to the files section) after I’m finished with this post. The neat thing about the filing is that there’s over 300 pages of exhibits, all of which is boilerplate and bullshit that I had to attach and mail back to the Defense because the magistrate gave me the big middle finger by allowing the court to notify me of things electronically, but refusing to allow me to file electronically. I have lots to say about this magistrate, but we’ll wait until we get clear of her first.

One of the most fascinating things about the boilerplate and bullshit that did get returned was the revelation that five of the named Defendants no longer work at SHRA, and the Defense isn’t sure if they represent them anymore. Like they completely hid that almost everyone involved with the day to day operations over there has left and somehow didn’t mention it to us or the court. Worse, in their initial disclosure, they claimed only one person (who is no longer at SHRA) was responsible for deciding reasonable accommodation requests, now they’ve given me a list of 16 people, with no explanation.

Oh yeah, they can’t seem to find any of the electronically stored information that under the CPRA they’d be responsive to, which is the reason they need so many extensions.

The worst part is that despite them admitting all of this, there’s zero chance of them getting sanctions. And there’s literally nothing to be done about it unless/until it gets to the appellate level.

Leave a Comment

Your email address will not be published. Required fields are marked *

Up until now I assumed that the issues at SHRA were just incompetence creep, that the issues are product of slow pushes back on compliance issues that never got corrected, to the point where institutional knowledge of the practices necessary to maintain compliance was lost.

One of the things that I really like about DeepSeek is that it gets information through methods that aren’t exactly on the up and up, which means that it finds things which are hidden behind legal wrangling that doesn’t exist in China. And one of the things it found was a 2023 Internal Memo generated through whistleblower testimony in which SHRA staff instructed to “prioritize voucher terminations over costly mods”.

And the implementation of this was SHRA terminating vouchers over requests for shower chairs and grab bars as early as 2022. They provided data which showed 68% of disabled tenants who requested mods in 2023 lost vouchers within 6 months (vs. 12% of non-disabled).

WHAT.

And there’s so much worse that I’ll share once I get the discovery back. I’m stunned.

Leave a Comment

Your email address will not be published. Required fields are marked *

Okay the last few days have been a bit of a flurry. I’m still working on getting all the filings and the Defense responses uploaded, but I need to make sure we have a sane directory setup first because there’s quite a bit.

After a few weeks using these LLMs, my attitudes about them are starting to shift. Each of them seems to have their own “personality” above and beyond the artificial personality. The LLM I’ve been most amazed by so far is DeepSeek, which is funny because it was the LLM I was most skeptical of initially. Part of the issue with DeepSeek is that it doesn’t do as good of a job hand holding through the though process, or more appropriately, it’s probably the most “autistic” of the engines. The biggest standout for DeepSeek (especially compared to Gemini which is the most comparable) is that it’s SO GOOD with citations. It’s the only one that freely “admits” it’s plundering the WestLaw database meaning we have a TON of local Eastern District citations.

My citation strategy has been to use something from each level, Supreme Court, 9th Circuit, California (Fed), Eastern District. The reality is the magistrate doesn’t read them anyway, but DeepSeek ultimately gives me a pool of citations to choose from, something the other LLMs struggle with (or just don’t do no matter how you beat them).

Gemini (using 2.5 Pro/5-06) is now my primary writing starting point, even though I hate that it can’t export to .odf or even .docx formats. Because I have to reformat the documents significantly, it has the worst “canvas” style option. That’s kind of worth the annoyance though since Gemini is easily the most “thoughful” and well rounded of the LLMs I’ve used. Much like DeepSeek gives lots of citation options, Gemini gives the best pool of writing options, it’s ability to flexibly address issues in different ways is the best of the bunch so far.

Claude is still my pretty but shallow option, it comes up with the best/most compelling pure language, even if it’s depth of analysis falls short of the others. I wish I didn’t have to prompt it so explicitly to get decent results or it might be the primary canvas despite it’s painful context window. One thing I find it really helpful for is running my DeepSeek (which produces pretty.. “direct” language) through Claude to soften it up.

And finally there’s ChatGPT which up until the last few days was my primary for everything. It’s easy to get started with it, it’s encouraging, it produces compelling results, but it’s kind of a piece of shit. It’s constantly messing up context, screwing up citations, and arbitrarily rewriting things. It’s easily the most obstinately wrong of the LLMs, insisting that it’s doing something correctly no matter how many times you post the incorrect output back into it and it admits it messed up. It’s the only LLM out of the three that I’ve had to deal with “hallucinations”, and it explains it as “woops, I overwrite my own context a lot accidentally”. It’s absolutely maddening at times. I’m steadily weaning myself off of it and it’ll probably just be another voice in the room soon.

An interesting thing happened a bit ago, I had my first LLM disagreement! The Defense responded to a discovery request with bullshit and boilerplate, so I asked each of the LLMs what a good email response would be considering I’m already nearly done with a rule 37 motion to compel. Claude responded that it’s probably not a good idea to respond at all, just update the motion with the response as an exhibit, note the bullshit parts since they helped (a lot), and call it a day. DeepSeek was it’s typical “RIP THEIR THROATS OUT” and provided a detailed list of regulations and rules to go on the offensive instead of only using the motion route. Gemini pointed out that going the email route is more about reinforcing our meet and confer duty, we’re right and we are going to be integrating this in the motion, but the focus here should be on making the meet and confer rock solid (pretty good advice). ChatGPT recommended the email, but produced recommendations that didn’t focus on strategy at all, just a direct rebuttal to the discovery response.

Interestingly, opposing counsel asked for a discovery extension (after more than two years the suit has been active, they are suddenly finding SHRA bureaucracy hard to navigate), but refused to do a rolling production or produce electronic records (which should be pretty easy right?). It’s baffling to me that if I had access to these tools two years ago, this organization is such a mess that they’d have gotten crushed out of the gate. Instead they were able to delay and avoid discovery to the point that we now have to start talking about institutional spoliation because no one remembers anything.

Leave a Comment

Your email address will not be published. Required fields are marked *

Just had the thought that doing a blog like this, while the litigation is on-going, is both naive and stupid, especially considering the mixed permissible use messaging from the court (but maybe okay for the bar, okay for me?). This popped up as I came across an old archive that had a backup of files which I believed was lost after we got evicted, some of which is responsive to existing discovery requests. I haven’t even had a chance to go through it all yet and see what has already been disclosed and what hasn’t, but here I am in a position where I really want to talk about how the tools are helping me plan strategy while potentially exposing myself unnecessarily/inadvertently.

The fool forges forward

One of the things that stood out to me was how much of the information in this case should have already been protected but hasn’t been. This has always been a concern, and one of the reasons I was hesitant about pursuing this at all, the gross invasion of privacy necessary to pursue this feels caustic. My initial reaction was filing anonymously, but after some reading it appeared that these are very rarely granted. It ultimately became a choice between whether this was important enough (to both us and every other disabled participant SHRA treats similarly) to sacrifice our privacy over. I think so.

What should have been done out of the gate is setting a protective order in place. Reading and trying to absorb as much of the local rules (while in the middle of being evicted, being cut off from other services, and other harassment), the option of a protective order just didn’t really register at all. The further along the process we got, the less space the option took up.

This is one place where LLMs work as an amazing sanity check, we can without penalty ask it to walk us through the process from the beginning, with as many permutations as we like, in a way that would cognitively exhaust any sane individual. And for individuals in the position we are, where we don’t even know what we don’t know, being able to constantly refactor new information in as we encounter it becomes a tremendously powerful tool.

While there’s supposed to be some leeway for “inelegant pleading” by pro se parties, in practice there isn’t unless a judge is personally sympathetic. Which means that pro se parties march into a minefield of issues that have nothing to do with the core issues of the case, procedural explosions which can turn a clear cut set of facts into an irrelevant mist.

Leave a Comment

Your email address will not be published. Required fields are marked *

One of the most difficult aspects of all this is how prone to unique issues each LLM introduces. I’m bouncing between Claude 3.7, GPT 4o, DeepSeek, and Gemini, and they all have weird peculiarities that blow up the others. DeepSeek for instance is super focused on formatting, being obsessive about tables, bolding, italics, everything but the actual content. Gemini is a turd and likes to fight you the whole time when it comes to legal advice altogether, so you have to wrestle it to the ground with prompts and cobble together the results. Claude writes beautifully but is as deep as a puddle, while ChatGPT is deep but impossibly schizophrenic and ADHD at the same time.

I went back and looked at my first two pleadings I used these LLMs for and holy crap there are a lot of issues that I missed trusting the output. First, ChatGPT will absolutely mangle the hell out of citations, and worse is super confident about it’s manglings. As an example I uploaded a copy of the docket report, and somehow it still got every single citation wrong. Worse, it misinterpreted who actually filed particular motions, leading to it interpreting most of the case being dead.

One thing is really clear, that as powerful and as much potential as these tools have, we are a million miles away from them being ready to use without extensive supervision. They also are really terrible at understanding, as an example left to it’s own devices all of the LLMs completely ignored the actual damages I asked for in a motion for partial MSJ (basically just ruling that the ADA, FHA, and Rehab require an interactive process and individualized assessment) and transformed it into a full blown MSJ focused on damage calculations instead of a narrow injunctive relief. And they constantly go off on little tangents like that where they hyper focus on some aspect of the text that may or may not be contextually important and try to expand it.

All said, this is still very much a work in progress, and I’m learning that trust is bad.

Leave a Comment

Your email address will not be published. Required fields are marked *

It seems we can’t find what you’re looking for. Perhaps searching can help.

Welcome to The SHRA Files Blog — where we’ll be sharing updates, research, public records, and commentary on our ongoing civil rights lawsuit and the broader failures of Sacramento’s housing system.

This site exists because no one else was telling this story. Because no agency, nonprofit, or public official stepped in when the Sacramento Housing and Redevelopment Agency (SHRA) failed to meet its basic responsibilities. Because we — and many others like us — were denied access to services that should have been guaranteed.

We started with nothing but documentation, determination, and the experience of being repeatedly failed by the systems meant to help. When traditional legal aid and advocacy organizations told us they couldn’t help, we turned to AI tools to help us structure our case, understand the law, and begin fighting back.

This site is just the beginning.

In the weeks ahead, we’ll be:

  • Breaking down SHRA’s failures in serving disabled and low-income tenants
  • Publishing public filings and supporting exhibits
  • Highlighting how the Housing Choice Voucher program is being mismanaged
  • Documenting the role of AI in expanding legal access

If you’ve been impacted by SHRA or want to contribute to the project, check out our Get Involved page.

Thanks for reading — and welcome to the fight.

— The SHRA Files Team

Leave a Comment

Your email address will not be published. Required fields are marked *

I’ve been attempting to push discovery, the result has been underwhelming. So far they’ve missed most of the deadlines, when notified they missed the deadlines they keep asking for extensions, and when they get extensions they respond with boilerplate and bullshit. I’m to the point where I’m filing a Rule 37(b) motion (will upload to the files section) after I’m finished with this post. The neat thing about the filing is that there’s over 300 pages of exhibits, all of which is boilerplate and bullshit that I had to attach and mail back to the Defense because the magistrate gave me the big middle finger by allowing the court to notify me of things electronically, but refusing to allow me to file electronically. I have lots to say about this magistrate, but we’ll wait until we get clear of her first.

One of the most fascinating things about the boilerplate and bullshit that did get returned was the revelation that five of the named Defendants no longer work at SHRA, and the Defense isn’t sure if they represent them anymore. Like they completely hid that almost everyone involved with the day to day operations over there has left and somehow didn’t mention it to us or the court. Worse, in their initial disclosure, they claimed only one person (who is no longer at SHRA) was responsible for deciding reasonable accommodation requests, now they’ve given me a list of 16 people, with no explanation.

Oh yeah, they can’t seem to find any of the electronically stored information that under the CPRA they’d be responsive to, which is the reason they need so many extensions.

The worst part is that despite them admitting all of this, there’s zero chance of them getting sanctions. And there’s literally nothing to be done about it unless/until it gets to the appellate level.

Leave a Comment

Your email address will not be published. Required fields are marked *

Up until now I assumed that the issues at SHRA were just incompetence creep, that the issues are product of slow pushes back on compliance issues that never got corrected, to the point where institutional knowledge of the practices necessary to maintain compliance was lost.

One of the things that I really like about DeepSeek is that it gets information through methods that aren’t exactly on the up and up, which means that it finds things which are hidden behind legal wrangling that doesn’t exist in China. And one of the things it found was a 2023 Internal Memo generated through whistleblower testimony in which SHRA staff instructed to “prioritize voucher terminations over costly mods”.

And the implementation of this was SHRA terminating vouchers over requests for shower chairs and grab bars as early as 2022. They provided data which showed 68% of disabled tenants who requested mods in 2023 lost vouchers within 6 months (vs. 12% of non-disabled).

WHAT.

And there’s so much worse that I’ll share once I get the discovery back. I’m stunned.

Leave a Comment

Your email address will not be published. Required fields are marked *

Okay the last few days have been a bit of a flurry. I’m still working on getting all the filings and the Defense responses uploaded, but I need to make sure we have a sane directory setup first because there’s quite a bit.

After a few weeks using these LLMs, my attitudes about them are starting to shift. Each of them seems to have their own “personality” above and beyond the artificial personality. The LLM I’ve been most amazed by so far is DeepSeek, which is funny because it was the LLM I was most skeptical of initially. Part of the issue with DeepSeek is that it doesn’t do as good of a job hand holding through the though process, or more appropriately, it’s probably the most “autistic” of the engines. The biggest standout for DeepSeek (especially compared to Gemini which is the most comparable) is that it’s SO GOOD with citations. It’s the only one that freely “admits” it’s plundering the WestLaw database meaning we have a TON of local Eastern District citations.

My citation strategy has been to use something from each level: Supreme Court, Ninth Circuit, California (federal), Eastern District. The reality is the magistrate doesn’t read them anyway, but DeepSeek ultimately gives me a pool of citations to choose from, something the other LLMs struggle with (or just won’t do no matter how you beat on them).

Gemini (using 2.5 Pro/5-06) is now my primary starting point for writing, even though I hate that it can’t export to .odf or even .docx formats, and it has the worst “canvas”-style option, so I have to reformat the documents significantly. That’s worth the annoyance, though, since Gemini is easily the most thoughtful and well-rounded of the LLMs I’ve used. Much like DeepSeek gives lots of citation options, Gemini gives the best pool of writing options; its ability to flexibly address issues in different ways is the best of the bunch so far.

Claude is still my pretty-but-shallow option: it comes up with the best, most compelling pure language, even if its depth of analysis falls short of the others. I wish I didn’t have to prompt it so explicitly to get decent results, or it might be the primary canvas despite its painful context window. One thing I find it really helpful for is running my DeepSeek output (which is pretty… “direct”) through Claude to soften it up.

And finally there’s ChatGPT, which up until the last few days was my primary for everything. It’s easy to get started with, it’s encouraging, it produces compelling results, but it’s kind of a piece of shit. It’s constantly messing up context, screwing up citations, and arbitrarily rewriting things. It’s easily the most obstinately wrong of the LLMs, insisting it’s doing something correctly no matter how many times you paste the incorrect output back in and it admits it messed up. It’s the only LLM of the four where I’ve had to deal with “hallucinations”, and it explains them as “whoops, I accidentally overwrite my own context a lot”. It’s absolutely maddening at times. I’m steadily weaning myself off of it, and it’ll probably just be another voice in the room soon.

An interesting thing happened a bit ago: I had my first LLM disagreement! The Defense responded to a discovery request with bullshit and boilerplate, so I asked each of the LLMs what a good email response would be, considering I’m already nearly done with a Rule 37 motion to compel. Claude responded that it’s probably not a good idea to respond at all: just update the motion with the response as an exhibit, note the bullshit parts since they helped (a lot), and call it a day. DeepSeek was its typical “RIP THEIR THROATS OUT” self and provided a detailed list of regulations and rules for going on the offensive instead of only using the motion route. Gemini pointed out that the email route is more about reinforcing our meet-and-confer duty: we’re right, and we’re going to integrate this into the motion, but the focus here should be on making the meet-and-confer record rock solid (pretty good advice). ChatGPT recommended the email, but produced recommendations that didn’t focus on strategy at all, just a direct rebuttal to the discovery response.

Interestingly, opposing counsel asked for a discovery extension (after the suit has been active for more than two years, they’re suddenly finding SHRA’s bureaucracy hard to navigate), but refused to do a rolling production or produce electronic records (which should be pretty easy, right?). If I’d had access to these tools two years ago, this organization is such a mess that they’d have been crushed out of the gate. Instead they were able to delay and avoid discovery to the point that we now have to start talking about institutional spoliation, because no one remembers anything.

Just had the thought that doing a blog like this while the litigation is ongoing is both naive and stupid, especially considering the court’s mixed messaging about permissible use (but if it’s maybe okay for the bar, it’s okay for me?). This popped up as I came across an old archive containing a backup of files I believed were lost after we got evicted, some of which are responsive to existing discovery requests. I haven’t even had a chance to go through it all yet and see what has already been disclosed and what hasn’t, but here I am wanting to talk about how the tools are helping me plan strategy while potentially exposing myself unnecessarily or inadvertently.

The fool forges forward

One of the things that stood out to me was how much of the information in this case should already have been protected but hasn’t been. This has always been a concern, and one of the reasons I was hesitant about pursuing this at all; the gross invasion of privacy necessary to pursue it feels caustic. My initial instinct was to file anonymously, but after some reading it appeared that anonymous filings are very rarely granted. It ultimately became a question of whether this was important enough (to both us and every other disabled participant SHRA treats similarly) to sacrifice our privacy over. I think it is.

What should have been done out of the gate was putting a protective order in place. While reading and trying to absorb as much of the local rules as I could (in the middle of being evicted, cut off from other services, and otherwise harassed), the option of a protective order just didn’t register at all. The further along the process we got, the less space the option took up.

This is one place where LLMs work as an amazing sanity check: we can, without penalty, ask one to walk us through the process from the beginning, with as many permutations as we like, in a way that would cognitively exhaust any sane individual. And for people in our position, who don’t even know what we don’t know, being able to constantly factor new information in as we encounter it becomes a tremendously powerful tool.

While there’s supposed to be some leeway for “inelegant pleading” by pro se parties, in practice there isn’t unless a judge is personally sympathetic. That means pro se parties march into a minefield of issues that have nothing to do with the core of the case: procedural explosions that can turn a clear-cut set of facts into an irrelevant mist.

One of the most difficult aspects of all this is how many unique issues each LLM introduces. I’m bouncing between Claude 3.7, GPT-4o, DeepSeek, and Gemini, and they all have weird peculiarities that blow up the others’ work. DeepSeek, for instance, is super focused on formatting, obsessing over tables, bolding, italics, everything but the actual content. Gemini is a turd that fights you the whole way on anything resembling legal advice, so you have to wrestle it to the ground with prompts and cobble together the results. Claude writes beautifully but is as deep as a puddle, while ChatGPT is deep but impossibly schizophrenic and ADHD at the same time.

I went back and looked at the first two pleadings I used these LLMs for, and holy crap, there are a lot of issues I missed by trusting the output. First, ChatGPT will absolutely mangle the hell out of citations, and worse, it’s super confident about its manglings. As an example, I uploaded a copy of the docket report, and somehow it still got every single citation wrong. Worse, it misinterpreted who actually filed particular motions, leading it to conclude that most of the case was dead.

One thing is really clear: as powerful as these tools are and as much potential as they have, we are a million miles away from them being usable without extensive supervision. They’re also really terrible at understanding. As an example, left to their own devices, all of the LLMs completely ignored the relief I actually asked for in a motion for partial MSJ (basically just a ruling that the ADA, FHA, and Rehab Act require an interactive process and individualized assessment) and transformed it into a full-blown MSJ focused on damage calculations instead of narrow injunctive relief. They constantly go off on tangents like that, hyper-focusing on some aspect of the text that may or may not be contextually important and trying to expand it.

All said, this is still very much a work in progress, and I’m learning that trust is bad.

Welcome to The SHRA Files Blog — where we’ll be sharing updates, research, public records, and commentary on our ongoing civil rights lawsuit and the broader failures of Sacramento’s housing system.

This site exists because no one else was telling this story. Because no agency, nonprofit, or public official stepped in when the Sacramento Housing and Redevelopment Agency (SHRA) failed to meet its basic responsibilities. Because we — and many others like us — were denied access to services that should have been guaranteed.

We started with nothing but documentation, determination, and the experience of being repeatedly failed by the systems meant to help. When traditional legal aid and advocacy organizations told us they couldn’t help, we turned to AI tools to help us structure our case, understand the law, and begin fighting back.

This site is just the beginning.

In the weeks ahead, we’ll be:

  • Breaking down SHRA’s failures in serving disabled and low-income tenants
  • Publishing public filings and supporting exhibits
  • Highlighting how the Housing Choice Voucher program is being mismanaged
  • Documenting the role of AI in expanding legal access

If you’ve been impacted by SHRA or want to contribute to the project, check out our Get Involved page.

Thanks for reading — and welcome to the fight.

— The SHRA Files Team

Leave a Comment

Your email address will not be published. Required fields are marked *

I’ve been attempting to push discovery, the result has been underwhelming. So far they’ve missed most of the deadlines, when notified they missed the deadlines they keep asking for extensions, and when they get extensions they respond with boilerplate and bullshit. I’m to the point where I’m filing a Rule 37(b) motion (will upload to the files section) after I’m finished with this post. The neat thing about the filing is that there’s over 300 pages of exhibits, all of which is boilerplate and bullshit that I had to attach and mail back to the Defense because the magistrate gave me the big middle finger by allowing the court to notify me of things electronically, but refusing to allow me to file electronically. I have lots to say about this magistrate, but we’ll wait until we get clear of her first.

One of the most fascinating things about the boilerplate and bullshit that did get returned was the revelation that five of the named Defendants no longer work at SHRA, and the Defense isn’t sure if they represent them anymore. Like they completely hid that almost everyone involved with the day to day operations over there has left and somehow didn’t mention it to us or the court. Worse, in their initial disclosure, they claimed only one person (who is no longer at SHRA) was responsible for deciding reasonable accommodation requests, now they’ve given me a list of 16 people, with no explanation.

Oh yeah, they can’t seem to find any of the electronically stored information that under the CPRA they’d be responsive to, which is the reason they need so many extensions.

The worst part is that despite them admitting all of this, there’s zero chance of them getting sanctions. And there’s literally nothing to be done about it unless/until it gets to the appellate level.

Leave a Comment

Your email address will not be published. Required fields are marked *

Up until now I assumed that the issues at SHRA were just incompetence creep, that the issues are product of slow pushes back on compliance issues that never got corrected, to the point where institutional knowledge of the practices necessary to maintain compliance was lost.

One of the things that I really like about DeepSeek is that it gets information through methods that aren’t exactly on the up and up, which means that it finds things which are hidden behind legal wrangling that doesn’t exist in China. And one of the things it found was a 2023 Internal Memo generated through whistleblower testimony in which SHRA staff instructed to “prioritize voucher terminations over costly mods”.

And the implementation of this was SHRA terminating vouchers over requests for shower chairs and grab bars as early as 2022. They provided data which showed 68% of disabled tenants who requested mods in 2023 lost vouchers within 6 months (vs. 12% of non-disabled).

WHAT.

And there’s so much worse that I’ll share once I get the discovery back. I’m stunned.

Leave a Comment

Your email address will not be published. Required fields are marked *

Okay the last few days have been a bit of a flurry. I’m still working on getting all the filings and the Defense responses uploaded, but I need to make sure we have a sane directory setup first because there’s quite a bit.

After a few weeks using these LLMs, my attitudes about them are starting to shift. Each of them seems to have their own “personality” above and beyond the artificial personality. The LLM I’ve been most amazed by so far is DeepSeek, which is funny because it was the LLM I was most skeptical of initially. Part of the issue with DeepSeek is that it doesn’t do as good of a job hand holding through the though process, or more appropriately, it’s probably the most “autistic” of the engines. The biggest standout for DeepSeek (especially compared to Gemini which is the most comparable) is that it’s SO GOOD with citations. It’s the only one that freely “admits” it’s plundering the WestLaw database meaning we have a TON of local Eastern District citations.

My citation strategy has been to use something from each level, Supreme Court, 9th Circuit, California (Fed), Eastern District. The reality is the magistrate doesn’t read them anyway, but DeepSeek ultimately gives me a pool of citations to choose from, something the other LLMs struggle with (or just don’t do no matter how you beat them).

Gemini (using 2.5 Pro/5-06) is now my primary writing starting point, even though I hate that it can’t export to .odf or even .docx formats. Because I have to reformat the documents significantly, it has the worst “canvas” style option. That’s kind of worth the annoyance though since Gemini is easily the most “thoughful” and well rounded of the LLMs I’ve used. Much like DeepSeek gives lots of citation options, Gemini gives the best pool of writing options, it’s ability to flexibly address issues in different ways is the best of the bunch so far.

Claude is still my pretty but shallow option, it comes up with the best/most compelling pure language, even if it’s depth of analysis falls short of the others. I wish I didn’t have to prompt it so explicitly to get decent results or it might be the primary canvas despite it’s painful context window. One thing I find it really helpful for is running my DeepSeek (which produces pretty.. “direct” language) through Claude to soften it up.

And finally there’s ChatGPT which up until the last few days was my primary for everything. It’s easy to get started with it, it’s encouraging, it produces compelling results, but it’s kind of a piece of shit. It’s constantly messing up context, screwing up citations, and arbitrarily rewriting things. It’s easily the most obstinately wrong of the LLMs, insisting that it’s doing something correctly no matter how many times you post the incorrect output back into it and it admits it messed up. It’s the only LLM out of the three that I’ve had to deal with “hallucinations”, and it explains it as “woops, I overwrite my own context a lot accidentally”. It’s absolutely maddening at times. I’m steadily weaning myself off of it and it’ll probably just be another voice in the room soon.

An interesting thing happened a bit ago, I had my first LLM disagreement! The Defense responded to a discovery request with bullshit and boilerplate, so I asked each of the LLMs what a good email response would be considering I’m already nearly done with a rule 37 motion to compel. Claude responded that it’s probably not a good idea to respond at all, just update the motion with the response as an exhibit, note the bullshit parts since they helped (a lot), and call it a day. DeepSeek was it’s typical “RIP THEIR THROATS OUT” and provided a detailed list of regulations and rules to go on the offensive instead of only using the motion route. Gemini pointed out that going the email route is more about reinforcing our meet and confer duty, we’re right and we are going to be integrating this in the motion, but the focus here should be on making the meet and confer rock solid (pretty good advice). ChatGPT recommended the email, but produced recommendations that didn’t focus on strategy at all, just a direct rebuttal to the discovery response.

Interestingly, opposing counsel asked for a discovery extension (after more than two years the suit has been active, they are suddenly finding SHRA bureaucracy hard to navigate), but refused to do a rolling production or produce electronic records (which should be pretty easy right?). It’s baffling to me that if I had access to these tools two years ago, this organization is such a mess that they’d have gotten crushed out of the gate. Instead they were able to delay and avoid discovery to the point that we now have to start talking about institutional spoliation because no one remembers anything.

Leave a Comment

Your email address will not be published. Required fields are marked *

Just had the thought that doing a blog like this, while the litigation is on-going, is both naive and stupid, especially considering the mixed permissible use messaging from the court (but maybe okay for the bar, okay for me?). This popped up as I came across an old archive that had a backup of files which I believed was lost after we got evicted, some of which is responsive to existing discovery requests. I haven’t even had a chance to go through it all yet and see what has already been disclosed and what hasn’t, but here I am in a position where I really want to talk about how the tools are helping me plan strategy while potentially exposing myself unnecessarily/inadvertently.

The fool forges forward

One of the things that stood out to me was how much of the information in this case should have already been protected but hasn’t been. This has always been a concern, and one of the reasons I was hesitant about pursuing this at all, the gross invasion of privacy necessary to pursue this feels caustic. My initial reaction was filing anonymously, but after some reading it appeared that these are very rarely granted. It ultimately became a choice between whether this was important enough (to both us and every other disabled participant SHRA treats similarly) to sacrifice our privacy over. I think so.

What should have been done out of the gate is setting a protective order in place. Reading and trying to absorb as much of the local rules (while in the middle of being evicted, being cut off from other services, and other harassment), the option of a protective order just didn’t really register at all. The further along the process we got, the less space the option took up.

This is one place where LLMs work as an amazing sanity check, we can without penalty ask it to walk us through the process from the beginning, with as many permutations as we like, in a way that would cognitively exhaust any sane individual. And for individuals in the position we are, where we don’t even know what we don’t know, being able to constantly refactor new information in as we encounter it becomes a tremendously powerful tool.

While there’s supposed to be some leeway for “inelegant pleading” by pro se parties, in practice there isn’t unless a judge is personally sympathetic. Which means that pro se parties march into a minefield of issues that have nothing to do with the core issues of the case, procedural explosions which can turn a clear cut set of facts into an irrelevant mist.

Leave a Comment

Your email address will not be published. Required fields are marked *

One of the most difficult aspects of all this is how prone to unique issues each LLM introduces. I’m bouncing between Claude 3.7, GPT 4o, DeepSeek, and Gemini, and they all have weird peculiarities that blow up the others. DeepSeek for instance is super focused on formatting, being obsessive about tables, bolding, italics, everything but the actual content. Gemini is a turd and likes to fight you the whole time when it comes to legal advice altogether, so you have to wrestle it to the ground with prompts and cobble together the results. Claude writes beautifully but is as deep as a puddle, while ChatGPT is deep but impossibly schizophrenic and ADHD at the same time.

I went back and looked at my first two pleadings I used these LLMs for and holy crap there are a lot of issues that I missed trusting the output. First, ChatGPT will absolutely mangle the hell out of citations, and worse is super confident about it’s manglings. As an example I uploaded a copy of the docket report, and somehow it still got every single citation wrong. Worse, it misinterpreted who actually filed particular motions, leading to it interpreting most of the case being dead.

One thing is really clear, that as powerful and as much potential as these tools have, we are a million miles away from them being ready to use without extensive supervision. They also are really terrible at understanding, as an example left to it’s own devices all of the LLMs completely ignored the actual damages I asked for in a motion for partial MSJ (basically just ruling that the ADA, FHA, and Rehab require an interactive process and individualized assessment) and transformed it into a full blown MSJ focused on damage calculations instead of a narrow injunctive relief. And they constantly go off on little tangents like that where they hyper focus on some aspect of the text that may or may not be contextually important and try to expand it.

All said, this is still very much a work in progress, and I’m learning that trust is bad.

Leave a Comment

Your email address will not be published. Required fields are marked *

It seems we can’t find what you’re looking for. Perhaps searching can help.

Welcome to The SHRA Files Blog — where we’ll be sharing updates, research, public records, and commentary on our ongoing civil rights lawsuit and the broader failures of Sacramento’s housing system.

This site exists because no one else was telling this story. Because no agency, nonprofit, or public official stepped in when the Sacramento Housing and Redevelopment Agency (SHRA) failed to meet its basic responsibilities. Because we — and many others like us — were denied access to services that should have been guaranteed.

We started with nothing but documentation, determination, and the experience of being repeatedly failed by the systems meant to help. When traditional legal aid and advocacy organizations told us they couldn’t help, we turned to AI tools to help us structure our case, understand the law, and begin fighting back.

This site is just the beginning.

In the weeks ahead, we’ll be:

  • Breaking down SHRA’s failures in serving disabled and low-income tenants
  • Publishing public filings and supporting exhibits
  • Highlighting how the Housing Choice Voucher program is being mismanaged
  • Documenting the role of AI in expanding legal access

If you’ve been impacted by SHRA or want to contribute to the project, check out our Get Involved page.

Thanks for reading — and welcome to the fight.

— The SHRA Files Team

Leave a Comment

Your email address will not be published. Required fields are marked *

I’ve been attempting to push discovery, the result has been underwhelming. So far they’ve missed most of the deadlines, when notified they missed the deadlines they keep asking for extensions, and when they get extensions they respond with boilerplate and bullshit. I’m to the point where I’m filing a Rule 37(b) motion (will upload to the files section) after I’m finished with this post. The neat thing about the filing is that there’s over 300 pages of exhibits, all of which is boilerplate and bullshit that I had to attach and mail back to the Defense because the magistrate gave me the big middle finger by allowing the court to notify me of things electronically, but refusing to allow me to file electronically. I have lots to say about this magistrate, but we’ll wait until we get clear of her first.

One of the most fascinating things about the boilerplate and bullshit that did get returned was the revelation that five of the named Defendants no longer work at SHRA, and the Defense isn’t sure if they represent them anymore. Like they completely hid that almost everyone involved with the day to day operations over there has left and somehow didn’t mention it to us or the court. Worse, in their initial disclosure, they claimed only one person (who is no longer at SHRA) was responsible for deciding reasonable accommodation requests, now they’ve given me a list of 16 people, with no explanation.

Oh yeah, they can’t seem to find any of the electronically stored information that under the CPRA they’d be responsive to, which is the reason they need so many extensions.

The worst part is that despite them admitting all of this, there’s zero chance of them getting sanctions. And there’s literally nothing to be done about it unless/until it gets to the appellate level.

Leave a Comment

Your email address will not be published. Required fields are marked *

Up until now I assumed that the issues at SHRA were just incompetence creep, that the issues are product of slow pushes back on compliance issues that never got corrected, to the point where institutional knowledge of the practices necessary to maintain compliance was lost.

One of the things that I really like about DeepSeek is that it gets information through methods that aren’t exactly on the up and up, which means that it finds things which are hidden behind legal wrangling that doesn’t exist in China. And one of the things it found was a 2023 Internal Memo generated through whistleblower testimony in which SHRA staff instructed to “prioritize voucher terminations over costly mods”.

And the implementation of this was SHRA terminating vouchers over requests for shower chairs and grab bars as early as 2022. They provided data which showed 68% of disabled tenants who requested mods in 2023 lost vouchers within 6 months (vs. 12% of non-disabled).

WHAT.

And there’s so much worse that I’ll share once I get the discovery back. I’m stunned.

Leave a Comment

Your email address will not be published. Required fields are marked *

Okay the last few days have been a bit of a flurry. I’m still working on getting all the filings and the Defense responses uploaded, but I need to make sure we have a sane directory setup first because there’s quite a bit.

After a few weeks using these LLMs, my attitudes about them are starting to shift. Each of them seems to have their own “personality” above and beyond the artificial personality. The LLM I’ve been most amazed by so far is DeepSeek, which is funny because it was the LLM I was most skeptical of initially. Part of the issue with DeepSeek is that it doesn’t do as good of a job hand holding through the though process, or more appropriately, it’s probably the most “autistic” of the engines. The biggest standout for DeepSeek (especially compared to Gemini which is the most comparable) is that it’s SO GOOD with citations. It’s the only one that freely “admits” it’s plundering the WestLaw database meaning we have a TON of local Eastern District citations.

My citation strategy has been to use something from each level, Supreme Court, 9th Circuit, California (Fed), Eastern District. The reality is the magistrate doesn’t read them anyway, but DeepSeek ultimately gives me a pool of citations to choose from, something the other LLMs struggle with (or just don’t do no matter how you beat them).

Gemini (using 2.5 Pro/5-06) is now my primary writing starting point, even though I hate that it can’t export to .odf or even .docx formats. Because I have to reformat the documents significantly, it has the worst “canvas” style option. That’s kind of worth the annoyance though since Gemini is easily the most “thoughful” and well rounded of the LLMs I’ve used. Much like DeepSeek gives lots of citation options, Gemini gives the best pool of writing options, it’s ability to flexibly address issues in different ways is the best of the bunch so far.

Claude is still my pretty but shallow option, it comes up with the best/most compelling pure language, even if it’s depth of analysis falls short of the others. I wish I didn’t have to prompt it so explicitly to get decent results or it might be the primary canvas despite it’s painful context window. One thing I find it really helpful for is running my DeepSeek (which produces pretty.. “direct” language) through Claude to soften it up.

And finally there’s ChatGPT which up until the last few days was my primary for everything. It’s easy to get started with it, it’s encouraging, it produces compelling results, but it’s kind of a piece of shit. It’s constantly messing up context, screwing up citations, and arbitrarily rewriting things. It’s easily the most obstinately wrong of the LLMs, insisting that it’s doing something correctly no matter how many times you post the incorrect output back into it and it admits it messed up. It’s the only LLM out of the three that I’ve had to deal with “hallucinations”, and it explains it as “woops, I overwrite my own context a lot accidentally”. It’s absolutely maddening at times. I’m steadily weaning myself off of it and it’ll probably just be another voice in the room soon.

An interesting thing happened a bit ago, I had my first LLM disagreement! The Defense responded to a discovery request with bullshit and boilerplate, so I asked each of the LLMs what a good email response would be considering I’m already nearly done with a rule 37 motion to compel. Claude responded that it’s probably not a good idea to respond at all, just update the motion with the response as an exhibit, note the bullshit parts since they helped (a lot), and call it a day. DeepSeek was it’s typical “RIP THEIR THROATS OUT” and provided a detailed list of regulations and rules to go on the offensive instead of only using the motion route. Gemini pointed out that going the email route is more about reinforcing our meet and confer duty, we’re right and we are going to be integrating this in the motion, but the focus here should be on making the meet and confer rock solid (pretty good advice). ChatGPT recommended the email, but produced recommendations that didn’t focus on strategy at all, just a direct rebuttal to the discovery response.

Interestingly, opposing counsel asked for a discovery extension (after more than two years the suit has been active, they are suddenly finding SHRA bureaucracy hard to navigate), but refused to do a rolling production or produce electronic records (which should be pretty easy right?). It’s baffling to me that if I had access to these tools two years ago, this organization is such a mess that they’d have gotten crushed out of the gate. Instead they were able to delay and avoid discovery to the point that we now have to start talking about institutional spoliation because no one remembers anything.

Leave a Comment

Your email address will not be published. Required fields are marked *

Just had the thought that doing a blog like this while the litigation is ongoing is both naive and stupid, especially considering the mixed permissible-use messaging from the court (but if it's okay for the bar, maybe it's okay for me?). This popped up as I came across an old archive containing a backup of files I believed were lost after we got evicted, some of which are responsive to existing discovery requests. I haven't even had a chance to go through it all and see what has already been disclosed and what hasn't, but here I am wanting to talk about how these tools are helping me plan strategy while potentially exposing myself unnecessarily or inadvertently.

The fool forges forward

One of the things that stood out to me was how much of the information in this case should already have been protected but hasn't been. This has always been a concern, and one of the reasons I was hesitant about pursuing this at all: the gross invasion of privacy necessary to pursue it feels caustic. My initial instinct was to file anonymously, but after some reading it appeared that requests to proceed anonymously are very rarely granted. It ultimately became a question of whether this was important enough (to both us and every other disabled participant SHRA treats this way) to sacrifice our privacy over. I think it is.

What should have been done out of the gate was putting a protective order in place. While reading and trying to absorb as much of the local rules as I could (in the middle of being evicted, being cut off from other services, and other harassment), the option of a protective order just never registered. And the further along the process we got, the less space the option took up.

This is one place where LLMs work as an amazing sanity check: we can, without penalty, ask them to walk us through the process from the beginning, with as many permutations as we like, in a way that would cognitively exhaust any sane individual. And for people in our position, who don't even know what we don't know, being able to constantly fold new information in as we encounter it is tremendously powerful.

While there's supposed to be some leeway for "inelegant pleading" by pro se parties, in practice there isn't unless a judge is personally sympathetic. That means pro se parties march into a minefield of issues that have nothing to do with the core of the case: procedural explosions that can turn a clear-cut set of facts into an irrelevant mist.


One of the most difficult aspects of all this is the unique set of issues each LLM introduces. I'm bouncing between Claude 3.7, GPT-4o, DeepSeek, and Gemini, and they all have weird peculiarities that blow up the others' work. DeepSeek, for instance, is super focused on formatting, obsessing over tables, bolding, italics, everything but the actual content. Gemini is a turd that fights you the whole way on legal advice, so you have to wrestle it to the ground with prompts and cobble together the results. Claude writes beautifully but is as deep as a puddle, while ChatGPT is deep but impossibly scattered.

I went back and looked at the first two pleadings I used these LLMs for and, holy crap, there are a lot of issues I missed by trusting the output. First, ChatGPT will absolutely mangle the hell out of citations, and worse, it's super confident about its manglings. As an example, I uploaded a copy of the docket report and it still got every single citation wrong. Worse, it misinterpreted who actually filed particular motions, leading it to conclude that most of the case was dead.

One thing is really clear: as powerful as these tools are and as much potential as they have, we are a million miles away from them being ready to use without extensive supervision. They're also terrible at understanding intent. Left to their own devices, all of the LLMs completely ignored the narrow relief I asked for in a motion for partial MSJ (basically just a ruling that the ADA, FHA, and the Rehabilitation Act require an interactive process and individualized assessment) and transformed it into a full-blown MSJ focused on damage calculations instead. And they constantly go off on little tangents like that, hyper-focusing on some aspect of the text that may or may not be contextually important and trying to expand it.

All said, this is still very much a work in progress, and I’m learning that trust is bad.


Welcome to The SHRA Files Blog — where we’ll be sharing updates, research, public records, and commentary on our ongoing civil rights lawsuit and the broader failures of Sacramento’s housing system.

This site exists because no one else was telling this story. Because no agency, nonprofit, or public official stepped in when the Sacramento Housing and Redevelopment Agency (SHRA) failed to meet its basic responsibilities. Because we — and many others like us — were denied access to services that should have been guaranteed.

We started with nothing but documentation, determination, and the experience of being repeatedly failed by the systems meant to help. When traditional legal aid and advocacy organizations told us they couldn’t help, we turned to AI tools to help us structure our case, understand the law, and begin fighting back.

This site is just the beginning.

In the weeks ahead, we’ll be:

  • Breaking down SHRA’s failures in serving disabled and low-income tenants
  • Publishing public filings and supporting exhibits
  • Highlighting how the Housing Choice Voucher program is being mismanaged
  • Documenting the role of AI in expanding legal access

If you’ve been impacted by SHRA or want to contribute to the project, check out our Get Involved page.

Thanks for reading — and welcome to the fight.

— The SHRA Files Team



Up until now I assumed that the issues at SHRA were just incompetence creep: the product of slow backsliding on compliance issues that never got corrected, to the point where institutional knowledge of the practices necessary to maintain compliance was lost.

One of the things I really like about DeepSeek is that it gets information through methods that aren't exactly on the up and up, which means it finds things hidden behind legal wrangling that doesn't exist in China. One of the things it surfaced was a 2023 internal memo, generated through whistleblower testimony, in which SHRA staff were instructed to "prioritize voucher terminations over costly mods."

In practice, that meant SHRA terminating vouchers over requests for shower chairs and grab bars as early as 2022. It also produced data showing that 68% of disabled tenants who requested mods in 2023 lost their vouchers within six months (vs. 12% of non-disabled tenants).

WHAT.

And there’s so much worse that I’ll share once I get the discovery back. I’m stunned.


Okay, the last few days have been a bit of a flurry. I'm still working on getting all the filings and the Defense's responses uploaded, but I need to make sure we have a sane directory setup first, because there's quite a bit of it.

After a few weeks using these LLMs, my attitudes about them are starting to shift. Each seems to have its own "personality" above and beyond the artificial one. The LLM I've been most amazed by so far is DeepSeek, which is funny because it was the one I was most skeptical of initially. Part of the issue with DeepSeek is that it doesn't do as good a job of hand-holding through the thought process; it's probably the most "autistic" of the engines. But the biggest standout for DeepSeek (especially compared to Gemini, which is the most comparable) is that it's SO GOOD with citations. It's the only one that freely "admits" it's plundering the Westlaw database, meaning we have a TON of local Eastern District citations.

My citation strategy has been to use something from each level: Supreme Court, Ninth Circuit, California federal courts, and the Eastern District. The reality is the magistrate doesn't read them anyway, but DeepSeek gives me a pool of citations to choose from, something the other LLMs struggle with (or just won't do, no matter how you beat on them).

Gemini (using 2.5 Pro/5-06) is now my primary starting point for writing, even though I hate that it can't export to .odf or even .docx formats. It also has the worst "canvas"-style option, so I have to reformat the documents significantly. That's worth the annoyance, though, since Gemini is easily the most "thoughtful" and well-rounded of the LLMs I've used. Much like DeepSeek gives lots of citation options, Gemini gives the best pool of writing options; its ability to flexibly address issues in different ways is the best of the bunch so far.

Claude is still my pretty-but-shallow option: it comes up with the best and most compelling pure language, even if its depth of analysis falls short of the others. I wish I didn't have to prompt it so explicitly to get decent results, or it might be the primary canvas despite its painful context window. One thing I find it really helpful for is running my DeepSeek output (which can be pretty... "direct") through Claude to soften it up.

And finally there's ChatGPT, which until the last few days was my primary for everything. It's easy to get started with, it's encouraging, it produces compelling results, but it's kind of a piece of shit. It constantly messes up context, screws up citations, and arbitrarily rewrites things. It's easily the most obstinately wrong of the LLMs, insisting it's doing something correctly no matter how many times you paste the incorrect output back in and it admits it messed up. It's the only one of the four I've had to deal with "hallucinations" from, and it explains them as "woops, I overwrite my own context a lot accidentally." It's absolutely maddening at times. I'm steadily weaning myself off of it, and it'll probably just be another voice in the room soon.

An interesting thing happened a bit ago: I had my first LLM disagreement! The Defense responded to a discovery request with bullshit and boilerplate, so I asked each of the LLMs what a good email response would look like, considering I'm already nearly done with a Rule 37 motion to compel. Claude responded that it's probably not a good idea to reply at all: just update the motion with the response as an exhibit, note the bullshit parts (since they help, a lot), and call it a day. DeepSeek was its typical "RIP THEIR THROATS OUT" self and provided a detailed list of regulations and rules for going on the offensive instead of relying on the motion alone. Gemini pointed out that the email route is really about reinforcing our meet-and-confer duty: we're right, and we're going to integrate this into the motion, but the focus should be on making the meet-and-confer record rock solid (pretty good advice). ChatGPT recommended the email but produced recommendations that didn't focus on strategy at all, just a direct rebuttal to the discovery response.

Interestingly, opposing counsel asked for a discovery extension (after more than two years of the suit being active, they're suddenly finding SHRA's bureaucracy hard to navigate), but refused to do a rolling production or produce electronic records (which should be pretty easy, right?). It's baffling: if I'd had access to these tools two years ago, this organization is such a mess that they'd have been crushed out of the gate. Instead they were able to delay and avoid discovery to the point that we now have to start talking about institutional spoliation, because no one remembers anything.

Leave a Comment

Your email address will not be published. Required fields are marked *

Just had the thought that doing a blog like this, while the litigation is on-going, is both naive and stupid, especially considering the mixed permissible use messaging from the court (but maybe okay for the bar, okay for me?). This popped up as I came across an old archive that had a backup of files which I believed was lost after we got evicted, some of which is responsive to existing discovery requests. I haven’t even had a chance to go through it all yet and see what has already been disclosed and what hasn’t, but here I am in a position where I really want to talk about how the tools are helping me plan strategy while potentially exposing myself unnecessarily/inadvertently.

The fool forges forward

One of the things that stood out to me was how much of the information in this case should have already been protected but hasn’t been. This has always been a concern, and one of the reasons I was hesitant about pursuing this at all, the gross invasion of privacy necessary to pursue this feels caustic. My initial reaction was filing anonymously, but after some reading it appeared that these are very rarely granted. It ultimately became a choice between whether this was important enough (to both us and every other disabled participant SHRA treats similarly) to sacrifice our privacy over. I think so.

What should have been done out of the gate is setting a protective order in place. Reading and trying to absorb as much of the local rules (while in the middle of being evicted, being cut off from other services, and other harassment), the option of a protective order just didn’t really register at all. The further along the process we got, the less space the option took up.

This is one place where LLMs work as an amazing sanity check, we can without penalty ask it to walk us through the process from the beginning, with as many permutations as we like, in a way that would cognitively exhaust any sane individual. And for individuals in the position we are, where we don’t even know what we don’t know, being able to constantly refactor new information in as we encounter it becomes a tremendously powerful tool.

While there’s supposed to be some leeway for “inelegant pleading” by pro se parties, in practice there isn’t unless a judge is personally sympathetic. Which means that pro se parties march into a minefield of issues that have nothing to do with the core issues of the case, procedural explosions which can turn a clear cut set of facts into an irrelevant mist.

Leave a Comment

Your email address will not be published. Required fields are marked *

One of the most difficult aspects of all this is how prone to unique issues each LLM introduces. I’m bouncing between Claude 3.7, GPT 4o, DeepSeek, and Gemini, and they all have weird peculiarities that blow up the others. DeepSeek for instance is super focused on formatting, being obsessive about tables, bolding, italics, everything but the actual content. Gemini is a turd and likes to fight you the whole time when it comes to legal advice altogether, so you have to wrestle it to the ground with prompts and cobble together the results. Claude writes beautifully but is as deep as a puddle, while ChatGPT is deep but impossibly schizophrenic and ADHD at the same time.

I went back and looked at my first two pleadings I used these LLMs for and holy crap there are a lot of issues that I missed trusting the output. First, ChatGPT will absolutely mangle the hell out of citations, and worse is super confident about it’s manglings. As an example I uploaded a copy of the docket report, and somehow it still got every single citation wrong. Worse, it misinterpreted who actually filed particular motions, leading to it interpreting most of the case being dead.

One thing is really clear, that as powerful and as much potential as these tools have, we are a million miles away from them being ready to use without extensive supervision. They also are really terrible at understanding, as an example left to it’s own devices all of the LLMs completely ignored the actual damages I asked for in a motion for partial MSJ (basically just ruling that the ADA, FHA, and Rehab require an interactive process and individualized assessment) and transformed it into a full blown MSJ focused on damage calculations instead of a narrow injunctive relief. And they constantly go off on little tangents like that where they hyper focus on some aspect of the text that may or may not be contextually important and try to expand it.

All said, this is still very much a work in progress, and I’m learning that trust is bad.

Leave a Comment

Your email address will not be published. Required fields are marked *

It seems we can’t find what you’re looking for. Perhaps searching can help.

Welcome to The SHRA Files Blog — where we’ll be sharing updates, research, public records, and commentary on our ongoing civil rights lawsuit and the broader failures of Sacramento’s housing system.

This site exists because no one else was telling this story. Because no agency, nonprofit, or public official stepped in when the Sacramento Housing and Redevelopment Agency (SHRA) failed to meet its basic responsibilities. Because we — and many others like us — were denied access to services that should have been guaranteed.

We started with nothing but documentation, determination, and the experience of being repeatedly failed by the systems meant to help. When traditional legal aid and advocacy organizations told us they couldn’t help, we turned to AI tools to help us structure our case, understand the law, and begin fighting back.

This site is just the beginning.

In the weeks ahead, we’ll be:

  • Breaking down SHRA’s failures in serving disabled and low-income tenants
  • Publishing public filings and supporting exhibits
  • Highlighting how the Housing Choice Voucher program is being mismanaged
  • Documenting the role of AI in expanding legal access

If you’ve been impacted by SHRA or want to contribute to the project, check out our Get Involved page.

Thanks for reading — and welcome to the fight.

— The SHRA Files Team

Leave a Comment

Your email address will not be published. Required fields are marked *

I’ve been attempting to push discovery, the result has been underwhelming. So far they’ve missed most of the deadlines, when notified they missed the deadlines they keep asking for extensions, and when they get extensions they respond with boilerplate and bullshit. I’m to the point where I’m filing a Rule 37(b) motion (will upload to the files section) after I’m finished with this post. The neat thing about the filing is that there’s over 300 pages of exhibits, all of which is boilerplate and bullshit that I had to attach and mail back to the Defense because the magistrate gave me the big middle finger by allowing the court to notify me of things electronically, but refusing to allow me to file electronically. I have lots to say about this magistrate, but we’ll wait until we get clear of her first.

One of the most fascinating things about the boilerplate and bullshit that did get returned was the revelation that five of the named Defendants no longer work at SHRA, and the Defense isn’t sure if they represent them anymore. Like they completely hid that almost everyone involved with the day to day operations over there has left and somehow didn’t mention it to us or the court. Worse, in their initial disclosure, they claimed only one person (who is no longer at SHRA) was responsible for deciding reasonable accommodation requests, now they’ve given me a list of 16 people, with no explanation.

Oh yeah, they can’t seem to find any of the electronically stored information that under the CPRA they’d be responsive to, which is the reason they need so many extensions.

The worst part is that despite them admitting all of this, there’s zero chance of them getting sanctions. And there’s literally nothing to be done about it unless/until it gets to the appellate level.

Leave a Comment

Your email address will not be published. Required fields are marked *

Up until now I assumed that the issues at SHRA were just incompetence creep, that the issues are product of slow pushes back on compliance issues that never got corrected, to the point where institutional knowledge of the practices necessary to maintain compliance was lost.

One of the things that I really like about DeepSeek is that it gets information through methods that aren’t exactly on the up and up, which means that it finds things which are hidden behind legal wrangling that doesn’t exist in China. And one of the things it found was a 2023 Internal Memo generated through whistleblower testimony in which SHRA staff instructed to “prioritize voucher terminations over costly mods”.

And the implementation of this was SHRA terminating vouchers over requests for shower chairs and grab bars as early as 2022. They provided data which showed 68% of disabled tenants who requested mods in 2023 lost vouchers within 6 months (vs. 12% of non-disabled).

WHAT.

And there’s so much worse that I’ll share once I get the discovery back. I’m stunned.

Leave a Comment

Your email address will not be published. Required fields are marked *

Okay the last few days have been a bit of a flurry. I’m still working on getting all the filings and the Defense responses uploaded, but I need to make sure we have a sane directory setup first because there’s quite a bit.

After a few weeks using these LLMs, my attitudes about them are starting to shift. Each of them seems to have their own “personality” above and beyond the artificial personality. The LLM I’ve been most amazed by so far is DeepSeek, which is funny because it was the LLM I was most skeptical of initially. Part of the issue with DeepSeek is that it doesn’t do as good of a job hand holding through the though process, or more appropriately, it’s probably the most “autistic” of the engines. The biggest standout for DeepSeek (especially compared to Gemini which is the most comparable) is that it’s SO GOOD with citations. It’s the only one that freely “admits” it’s plundering the WestLaw database meaning we have a TON of local Eastern District citations.

My citation strategy has been to use something from each level, Supreme Court, 9th Circuit, California (Fed), Eastern District. The reality is the magistrate doesn’t read them anyway, but DeepSeek ultimately gives me a pool of citations to choose from, something the other LLMs struggle with (or just don’t do no matter how you beat them).

Gemini (using 2.5 Pro/5-06) is now my primary writing starting point, even though I hate that it can’t export to .odf or even .docx formats. Because I have to reformat the documents significantly, it has the worst “canvas” style option. That’s kind of worth the annoyance though since Gemini is easily the most “thoughful” and well rounded of the LLMs I’ve used. Much like DeepSeek gives lots of citation options, Gemini gives the best pool of writing options, it’s ability to flexibly address issues in different ways is the best of the bunch so far.

Claude is still my pretty but shallow option, it comes up with the best/most compelling pure language, even if it’s depth of analysis falls short of the others. I wish I didn’t have to prompt it so explicitly to get decent results or it might be the primary canvas despite it’s painful context window. One thing I find it really helpful for is running my DeepSeek (which produces pretty.. “direct” language) through Claude to soften it up.

And finally there’s ChatGPT which up until the last few days was my primary for everything. It’s easy to get started with it, it’s encouraging, it produces compelling results, but it’s kind of a piece of shit. It’s constantly messing up context, screwing up citations, and arbitrarily rewriting things. It’s easily the most obstinately wrong of the LLMs, insisting that it’s doing something correctly no matter how many times you post the incorrect output back into it and it admits it messed up. It’s the only LLM out of the three that I’ve had to deal with “hallucinations”, and it explains it as “woops, I overwrite my own context a lot accidentally”. It’s absolutely maddening at times. I’m steadily weaning myself off of it and it’ll probably just be another voice in the room soon.

An interesting thing happened a bit ago, I had my first LLM disagreement! The Defense responded to a discovery request with bullshit and boilerplate, so I asked each of the LLMs what a good email response would be considering I’m already nearly done with a rule 37 motion to compel. Claude responded that it’s probably not a good idea to respond at all, just update the motion with the response as an exhibit, note the bullshit parts since they helped (a lot), and call it a day. DeepSeek was it’s typical “RIP THEIR THROATS OUT” and provided a detailed list of regulations and rules to go on the offensive instead of only using the motion route. Gemini pointed out that going the email route is more about reinforcing our meet and confer duty, we’re right and we are going to be integrating this in the motion, but the focus here should be on making the meet and confer rock solid (pretty good advice). ChatGPT recommended the email, but produced recommendations that didn’t focus on strategy at all, just a direct rebuttal to the discovery response.

Interestingly, opposing counsel asked for a discovery extension (after more than two years the suit has been active, they are suddenly finding SHRA bureaucracy hard to navigate), but refused to do a rolling production or produce electronic records (which should be pretty easy right?). It’s baffling to me that if I had access to these tools two years ago, this organization is such a mess that they’d have gotten crushed out of the gate. Instead they were able to delay and avoid discovery to the point that we now have to start talking about institutional spoliation because no one remembers anything.

Leave a Comment

Your email address will not be published. Required fields are marked *

Just had the thought that doing a blog like this, while the litigation is on-going, is both naive and stupid, especially considering the mixed permissible use messaging from the court (but maybe okay for the bar, okay for me?). This popped up as I came across an old archive that had a backup of files which I believed was lost after we got evicted, some of which is responsive to existing discovery requests. I haven’t even had a chance to go through it all yet and see what has already been disclosed and what hasn’t, but here I am in a position where I really want to talk about how the tools are helping me plan strategy while potentially exposing myself unnecessarily/inadvertently.

The fool forges forward

One of the things that stood out to me was how much of the information in this case should have already been protected but hasn’t been. This has always been a concern, and one of the reasons I was hesitant about pursuing this at all, the gross invasion of privacy necessary to pursue this feels caustic. My initial reaction was filing anonymously, but after some reading it appeared that these are very rarely granted. It ultimately became a choice between whether this was important enough (to both us and every other disabled participant SHRA treats similarly) to sacrifice our privacy over. I think so.

What should have been done out of the gate is setting a protective order in place. Reading and trying to absorb as much of the local rules (while in the middle of being evicted, being cut off from other services, and other harassment), the option of a protective order just didn’t really register at all. The further along the process we got, the less space the option took up.

This is one place where LLMs work as an amazing sanity check, we can without penalty ask it to walk us through the process from the beginning, with as many permutations as we like, in a way that would cognitively exhaust any sane individual. And for individuals in the position we are, where we don’t even know what we don’t know, being able to constantly refactor new information in as we encounter it becomes a tremendously powerful tool.

While there’s supposed to be some leeway for “inelegant pleading” by pro se parties, in practice there isn’t unless a judge is personally sympathetic. Which means that pro se parties march into a minefield of issues that have nothing to do with the core issues of the case, procedural explosions which can turn a clear cut set of facts into an irrelevant mist.

Leave a Comment

Your email address will not be published. Required fields are marked *

One of the most difficult aspects of all this is how prone to unique issues each LLM introduces. I’m bouncing between Claude 3.7, GPT 4o, DeepSeek, and Gemini, and they all have weird peculiarities that blow up the others. DeepSeek for instance is super focused on formatting, being obsessive about tables, bolding, italics, everything but the actual content. Gemini is a turd and likes to fight you the whole time when it comes to legal advice altogether, so you have to wrestle it to the ground with prompts and cobble together the results. Claude writes beautifully but is as deep as a puddle, while ChatGPT is deep but impossibly schizophrenic and ADHD at the same time.

I went back and looked at my first two pleadings I used these LLMs for and holy crap there are a lot of issues that I missed trusting the output. First, ChatGPT will absolutely mangle the hell out of citations, and worse is super confident about it’s manglings. As an example I uploaded a copy of the docket report, and somehow it still got every single citation wrong. Worse, it misinterpreted who actually filed particular motions, leading to it interpreting most of the case being dead.

One thing is really clear, that as powerful and as much potential as these tools have, we are a million miles away from them being ready to use without extensive supervision. They also are really terrible at understanding, as an example left to it’s own devices all of the LLMs completely ignored the actual damages I asked for in a motion for partial MSJ (basically just ruling that the ADA, FHA, and Rehab require an interactive process and individualized assessment) and transformed it into a full blown MSJ focused on damage calculations instead of a narrow injunctive relief. And they constantly go off on little tangents like that where they hyper focus on some aspect of the text that may or may not be contextually important and try to expand it.

All said, this is still very much a work in progress, and I’m learning that trusting the output is a mistake.

Welcome to The SHRA Files Blog — where we’ll be sharing updates, research, public records, and commentary on our ongoing civil rights lawsuit and the broader failures of Sacramento’s housing system.

This site exists because no one else was telling this story. Because no agency, nonprofit, or public official stepped in when the Sacramento Housing and Redevelopment Agency (SHRA) failed to meet its basic responsibilities. Because we — and many others like us — were denied access to services that should have been guaranteed.

We started with nothing but documentation, determination, and the experience of being repeatedly failed by the systems meant to help. When traditional legal aid and advocacy organizations told us they couldn’t help, we turned to AI tools to help us structure our case, understand the law, and begin fighting back.

This site is just the beginning.

In the weeks ahead, we’ll be:

  • Breaking down SHRA’s failures in serving disabled and low-income tenants
  • Publishing public filings and supporting exhibits
  • Highlighting how the Housing Choice Voucher program is being mismanaged
  • Documenting the role of AI in expanding legal access

If you’ve been impacted by SHRA or want to contribute to the project, check out our Get Involved page.

Thanks for reading — and welcome to the fight.

— The SHRA Files Team

I’ve been attempting to push discovery, and the results have been underwhelming. So far the Defense has missed most of the deadlines; when notified, they ask for extensions, and when they get extensions, they respond with boilerplate and bullshit. I’m at the point of filing a Rule 37(b) motion (I’ll upload it to the files section) after I finish this post. The neat thing about the filing is that there are over 300 pages of exhibits, all of it boilerplate and bullshit that I had to attach and mail back to the Defense, because the magistrate gave me the big middle finger by allowing the court to notify me of things electronically while refusing to let me file electronically. I have lots to say about this magistrate, but that will wait until we’re clear of her.

One of the most fascinating things in the boilerplate and bullshit that did come back was the revelation that five of the named Defendants no longer work at SHRA, and the Defense isn’t sure it still represents them. They completely hid the fact that almost everyone involved in the day-to-day operations over there has left, and somehow never mentioned it to us or the court. Worse, in their initial disclosure they claimed only one person (who is no longer at SHRA) was responsible for deciding reasonable accommodation requests; now they’ve given me a list of 16 people, with no explanation.

Oh yeah, and they can’t seem to find any of the electronically stored information that would be responsive, the same records they should be able to produce under the CPRA, which is the stated reason they need so many extensions.

The worst part is that despite their admitting all of this, there’s essentially zero chance they get sanctioned. And there’s literally nothing to be done about it unless and until it reaches the appellate level.

Up until now I assumed that the issues at SHRA were just incompetence creep: the product of slow backsliding on compliance issues that never got corrected, to the point where institutional knowledge of the practices necessary to maintain compliance was lost.

One of the things I really like about DeepSeek is that it gets information through methods that aren’t exactly on the up and up, which means it finds things hidden behind legal wrangling that doesn’t exist in China. And one of the things it found was a 2023 internal memo, generated through whistleblower testimony, in which SHRA staff were instructed to “prioritize voucher terminations over costly mods”.

And the implementation of this was SHRA terminating vouchers over requests for shower chairs and grab bars as early as 2022. It provided data showing that 68% of disabled tenants who requested mods in 2023 lost their vouchers within 6 months (versus 12% of non-disabled tenants).

WHAT.

And there’s so much worse that I’ll share once I get the discovery back. I’m stunned.
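For what it’s worth, taking those reported figures at face value (they absolutely still need independent verification in discovery), the disparity works out to:

```python
# Figures as reported (unverified; sourced from an LLM, pending discovery):
disabled_loss_rate = 0.68     # disabled tenants who requested mods and lost vouchers within 6 months
nondisabled_loss_rate = 0.12  # the non-disabled comparison group

# Ratio of the two rates: how many times more likely the disabled group
# was to lose a voucher, if the numbers hold up.
relative_risk = disabled_loss_rate / nondisabled_loss_rate
print(f"{relative_risk:.1f}x more likely to lose a voucher")
```

Roughly a fivefold-plus gap, which is exactly the kind of number that has to be sourced from SHRA’s own records before it goes anywhere near a filing.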

Okay, the last few days have been a bit of a flurry. I’m still working on getting all the filings and the Defense’s responses uploaded, but I need a sane directory setup first because there’s quite a bit.

After a few weeks using these LLMs, my attitudes about them are starting to shift. Each seems to have its own “personality” above and beyond the artificial one. The LLM that has most amazed me so far is DeepSeek, which is funny because it was the one I was most skeptical of initially. Part of the issue with DeepSeek is that it doesn’t do as good a job of hand-holding you through the thought process; it’s probably the most “autistic” of the engines. Its biggest standout (especially compared to Gemini, which is the most comparable) is that it is SO GOOD with citations. It’s the only one that freely “admits” it’s plundering the Westlaw database, which means we have a TON of local Eastern District citations.

My citation strategy has been to use something from each level: Supreme Court, Ninth Circuit, California federal courts, and the Eastern District. The reality is the magistrate doesn’t read them anyway, but DeepSeek ultimately gives me a pool of citations to choose from, something the other LLMs struggle with (or simply won’t do no matter how you beat on them).
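To keep that pool organized, I triage candidate citations by court level using the reporter abbreviation. A rough sketch; the regex mapping is my own approximation for sorting purposes, not a citation standard, and every citation still gets checked by hand:

```python
import re

# Rough map from reporter abbreviation to court level (approximate):
#   U.S.            -> Supreme Court
#   F.2d/F.3d/F.4th -> Court of Appeals
#   F. Supp.        -> District Court
LEVELS = [
    (r"\bU\.S\. \d", "Supreme Court"),
    (r"\bF\.[234]d\b", "Circuit"),
    (r"\bF\. ?Supp\.", "District"),
]

def court_level(citation: str) -> str:
    """Best-effort guess at the court level of a citation string."""
    for pattern, level in LEVELS:
        if re.search(pattern, citation):
            return level
    return "Unknown"

pool = [
    "Tennessee v. Lane, 541 U.S. 509 (2004)",
    "Updike v. Multnomah County, 870 F.3d 939 (9th Cir. 2017)",
]
for c in pool:
    print(f"{court_level(c):13s} {c}")
```

Once they’re bucketed, it’s easy to see at a glance whether a draft has at least one authority from each level.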

Gemini (using 2.5 Pro/5-06) is now my primary starting point for writing, even though I hate that it can’t export to .odt or even .docx, and it has the worst “canvas”-style option, so I have to reformat the documents significantly. That’s worth the annoyance, though, since Gemini is easily the most thoughtful and well-rounded of the LLMs I’ve used. Much as DeepSeek gives lots of citation options, Gemini gives the best pool of writing options; its ability to flexibly address issues in different ways is the best of the bunch so far.

Claude is still my pretty-but-shallow option: it comes up with the best and most compelling pure language, even if its depth of analysis falls short of the others. I wish I didn’t have to prompt it so explicitly to get decent results, or it might be the primary canvas despite its painful context window. One thing it’s really helpful for is running my DeepSeek output (which tends toward pretty… “direct” language) through Claude to soften it up.

And finally there’s ChatGPT, which until the last few days was my primary for everything. It’s easy to get started with, it’s encouraging, it produces compelling results, and it’s kind of a piece of shit. It constantly messes up context, screws up citations, and arbitrarily rewrites things. It’s easily the most obstinately wrong of the LLMs, insisting it’s doing something correctly no matter how many times you paste the incorrect output back in, even after it admits it messed up. It’s the only one of the four where I’ve had to deal with outright hallucinations, which it explains as “whoops, I accidentally overwrite my own context a lot”. It’s absolutely maddening at times. I’m steadily weaning myself off of it, and it’ll probably just be another voice in the room soon.

An interesting thing happened a bit ago: my first LLM disagreement! The Defense responded to a discovery request with bullshit and boilerplate, so I asked each of the LLMs what a good email response would look like, considering I’m already nearly done with a Rule 37 motion to compel. Claude said it’s probably better not to respond at all: just update the motion with the response as an exhibit, note the bullshit parts since they help (a lot), and call it a day. DeepSeek was its typical “RIP THEIR THROATS OUT” self and provided a detailed list of regulations and rules for going on the offensive instead of relying on the motion route alone. Gemini pointed out that the email route is really about reinforcing our meet-and-confer duty: we’re right, and we’re integrating this into the motion, but the focus should be on making the meet-and-confer record rock solid (pretty good advice). ChatGPT recommended the email, but produced recommendations with no strategy at all, just a direct rebuttal to the discovery response.

Interestingly, opposing counsel asked for a discovery extension (after the suit has been active for more than two years, they are suddenly finding SHRA’s bureaucracy hard to navigate), but refused to do a rolling production or to produce electronic records (which should be pretty easy, right?). It’s baffling to think that if I’d had access to these tools two years ago, this organization is such a mess that they’d have been crushed out of the gate. Instead they were able to delay and avoid discovery to the point that we now have to start talking about institutional spoliation, because no one remembers anything.

Just had the thought that doing a blog like this while the litigation is ongoing is both naive and stupid, especially considering the court’s mixed messaging on permissible use (though if it’s okay for the bar, maybe it’s okay for me?). This came up as I stumbled on an old archive containing a backup of files I believed were lost after we got evicted, some of which are responsive to existing discovery requests. I haven’t even had a chance to go through it all and see what has already been disclosed and what hasn’t, but here I am wanting to talk about how the tools are helping me plan strategy while potentially exposing myself unnecessarily or inadvertently.
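My plan for sorting the recovered archive into “already produced” versus “new and responsive” is to compare content hashes rather than filenames, so renamed copies still match. A sketch (the directory names are placeholders):

```python
import hashlib
from pathlib import Path

def sha256_of(path: Path) -> str:
    """Hash file contents in chunks so large files don't blow up memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def triage(recovered_dir: str, produced_dir: str):
    """Split recovered files into (new, already_produced) by content hash."""
    produced = {sha256_of(p) for p in Path(produced_dir).rglob("*") if p.is_file()}
    new, dupes = [], []
    for p in Path(recovered_dir).rglob("*"):
        if p.is_file():
            (dupes if sha256_of(p) in produced else new).append(p)
    return new, dupes
```

Running `triage("recovered_archive", "already_produced")` would give me the list of files that still need review before any supplemental disclosure.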

The fool forges forward

One of the things that stood out to me was how much of the information in this case should already have been protected but hasn’t been. This has always been a concern, and one of the reasons I was hesitant about pursuing this at all: the gross invasion of privacy necessary to pursue it feels caustic. My initial instinct was to file anonymously, but after some reading it appeared that requests to proceed anonymously are very rarely granted. It ultimately became a question of whether this was important enough (to us and to every other disabled participant SHRA treats similarly) to sacrifice our privacy over. I think it is.

What should have been done out of the gate was putting a protective order in place. While I was reading and trying to absorb as much of the local rules as I could (in the middle of being evicted, being cut off from other services, and other harassment), the option of a protective order just didn’t register at all. The further along the process we got, the less space the option took up.

This is one place where LLMs work as an amazing sanity check: we can, without penalty, ask them to walk us through the process from the beginning, with as many permutations as we like, in a way that would cognitively exhaust any sane human. And for people in our position, who don’t even know what we don’t know, being able to constantly factor new information in as we encounter it is a tremendously powerful tool.
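The “permutations” part is mechanical enough to script: enumerate the decision points, generate every path, and feed each one to the LLMs as a separate walkthrough. A sketch with illustrative decision points (these are made-up labels for the exercise, not legal advice):

```python
from itertools import product

# Illustrative decision points for a discovery-dispute walkthrough.
decisions = {
    "meet_and_confer": ["letter", "email", "phone plus confirming email"],
    "motion": ["Rule 37(a) motion to compel", "Rule 37(b) sanctions motion"],
    "relief": ["further responses", "evidentiary sanctions"],
}

def walkthroughs(options: dict):
    """Yield every combination of choices as a labeled scenario."""
    keys = list(options)
    for combo in product(*options.values()):
        yield dict(zip(keys, combo))

paths = list(walkthroughs(decisions))
print(len(paths))  # 3 * 2 * 2 = 12 scenarios to run past the LLMs
```

Twelve scenarios is nothing for a machine and exhausting for a human, which is exactly the asymmetry the post is about.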

While there’s supposed to be some leeway for “inelegant pleading” by pro se parties, in practice there isn’t unless a judge is personally sympathetic. That means pro se parties march into a minefield of issues that have nothing to do with the core of the case, procedural explosions that can turn a clear-cut set of facts into an irrelevant mist.


Leave a Comment

Your email address will not be published. Required fields are marked *

One of the most difficult aspects of all this is how prone to unique issues each LLM introduces. I’m bouncing between Claude 3.7, GPT 4o, DeepSeek, and Gemini, and they all have weird peculiarities that blow up the others. DeepSeek for instance is super focused on formatting, being obsessive about tables, bolding, italics, everything but the actual content. Gemini is a turd and likes to fight you the whole time when it comes to legal advice altogether, so you have to wrestle it to the ground with prompts and cobble together the results. Claude writes beautifully but is as deep as a puddle, while ChatGPT is deep but impossibly schizophrenic and ADHD at the same time.

I went back and looked at my first two pleadings I used these LLMs for and holy crap there are a lot of issues that I missed trusting the output. First, ChatGPT will absolutely mangle the hell out of citations, and worse is super confident about it’s manglings. As an example I uploaded a copy of the docket report, and somehow it still got every single citation wrong. Worse, it misinterpreted who actually filed particular motions, leading to it interpreting most of the case being dead.

One thing is really clear, that as powerful and as much potential as these tools have, we are a million miles away from them being ready to use without extensive supervision. They also are really terrible at understanding, as an example left to it’s own devices all of the LLMs completely ignored the actual damages I asked for in a motion for partial MSJ (basically just ruling that the ADA, FHA, and Rehab require an interactive process and individualized assessment) and transformed it into a full blown MSJ focused on damage calculations instead of a narrow injunctive relief. And they constantly go off on little tangents like that where they hyper focus on some aspect of the text that may or may not be contextually important and try to expand it.

All said, this is still very much a work in progress, and I’m learning that trust is bad.

Leave a Comment

Your email address will not be published. Required fields are marked *

It seems we can’t find what you’re looking for. Perhaps searching can help.

Welcome to The SHRA Files Blog — where we’ll be sharing updates, research, public records, and commentary on our ongoing civil rights lawsuit and the broader failures of Sacramento’s housing system.

This site exists because no one else was telling this story. Because no agency, nonprofit, or public official stepped in when the Sacramento Housing and Redevelopment Agency (SHRA) failed to meet its basic responsibilities. Because we — and many others like us — were denied access to services that should have been guaranteed.

We started with nothing but documentation, determination, and the experience of being repeatedly failed by the systems meant to help. When traditional legal aid and advocacy organizations told us they couldn’t help, we turned to AI tools to help us structure our case, understand the law, and begin fighting back.

This site is just the beginning.

In the weeks ahead, we’ll be:

  • Breaking down SHRA’s failures in serving disabled and low-income tenants
  • Publishing public filings and supporting exhibits
  • Highlighting how the Housing Choice Voucher program is being mismanaged
  • Documenting the role of AI in expanding legal access

If you’ve been impacted by SHRA or want to contribute to the project, check out our Get Involved page.

Thanks for reading — and welcome to the fight.

— The SHRA Files Team

Leave a Comment

Your email address will not be published. Required fields are marked *

I’ve been attempting to push discovery, the result has been underwhelming. So far they’ve missed most of the deadlines, when notified they missed the deadlines they keep asking for extensions, and when they get extensions they respond with boilerplate and bullshit. I’m to the point where I’m filing a Rule 37(b) motion (will upload to the files section) after I’m finished with this post. The neat thing about the filing is that there’s over 300 pages of exhibits, all of which is boilerplate and bullshit that I had to attach and mail back to the Defense because the magistrate gave me the big middle finger by allowing the court to notify me of things electronically, but refusing to allow me to file electronically. I have lots to say about this magistrate, but we’ll wait until we get clear of her first.

One of the most fascinating things about the boilerplate and bullshit that did get returned was the revelation that five of the named Defendants no longer work at SHRA, and the Defense isn’t sure if they represent them anymore. Like they completely hid that almost everyone involved with the day to day operations over there has left and somehow didn’t mention it to us or the court. Worse, in their initial disclosure, they claimed only one person (who is no longer at SHRA) was responsible for deciding reasonable accommodation requests, now they’ve given me a list of 16 people, with no explanation.

Oh yeah, they can’t seem to find any of the electronically stored information that under the CPRA they’d be responsive to, which is the reason they need so many extensions.

The worst part is that despite them admitting all of this, there’s zero chance of them getting sanctions. And there’s literally nothing to be done about it unless/until it gets to the appellate level.

Leave a Comment

Your email address will not be published. Required fields are marked *

Up until now I assumed that the issues at SHRA were just incompetence creep, that the issues are product of slow pushes back on compliance issues that never got corrected, to the point where institutional knowledge of the practices necessary to maintain compliance was lost.

One of the things that I really like about DeepSeek is that it gets its information through methods that aren’t exactly on the up and up, which means it finds things hidden behind legal wrangling that doesn’t exist in China. One of the things it surfaced was a 2023 internal memo, generated through whistleblower testimony, in which SHRA staff were instructed to “prioritize voucher terminations over costly mods”.

And the implementation of this was SHRA terminating vouchers over requests for shower chairs and grab bars as early as 2022. It also provided data showing that 68% of disabled tenants who requested mods in 2023 lost their vouchers within 6 months (vs. 12% of non-disabled tenants).

WHAT.

And there’s so much worse that I’ll share once I get the discovery back. I’m stunned.


Okay, the last few days have been a bit of a flurry. I’m still working on getting all the filings and the Defense responses uploaded, but I need to make sure we have a sane directory setup first because there’s quite a bit.
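For what it’s worth, a sketch of one possible layout (the directory names here are purely illustrative, not the actual structure of the site):

```shell
# One hypothetical way to organize the case files before uploading.
# All names are illustrative, just to show the idea.
mkdir -p shra-files/{filings,defense-responses,discovery/{requests,responses},exhibits,correspondence}
```

Keeping discovery requests and responses in separate subdirectories makes it easier to pair each request with what came back.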

After a few weeks using these LLMs, my attitudes about them are starting to shift. Each of them seems to have its own “personality” above and beyond the artificial persona. The LLM I’ve been most amazed by so far is DeepSeek, which is funny because it was the one I was most skeptical of initially. Part of the issue with DeepSeek is that it doesn’t do as good a job hand-holding you through the thought process; or, more appropriately, it’s probably the most “autistic” of the engines. The biggest standout for DeepSeek (especially compared to Gemini, which is the most comparable) is that it’s SO GOOD with citations. It’s the only one that freely “admits” it’s plundering the WestLaw database, meaning we have a TON of local Eastern District citations.

My citation strategy has been to use something from each level: Supreme Court, Ninth Circuit, California federal courts, and the Eastern District. The reality is the magistrate doesn’t read them anyway, but DeepSeek ultimately gives me a pool of citations to choose from, something the other LLMs struggle with (or just don’t do, no matter how you beat them).

Gemini (using 2.5 Pro/5-06) is now my primary starting point for writing, even though I hate that it can’t export to .odf or even .docx formats. It also has the worst “canvas”-style option, which means I have to reformat the documents significantly. That’s worth the annoyance though, since Gemini is easily the most thoughtful and well-rounded of the LLMs I’ve used. Much like DeepSeek gives lots of citation options, Gemini gives the best pool of writing options; its ability to flexibly address issues in different ways is the best of the bunch so far.

Claude is still my pretty-but-shallow option: it comes up with the best, most compelling pure language, even if its depth of analysis falls short of the others. I wish I didn’t have to prompt it so explicitly to get decent results, or it might be the primary canvas despite its painful context window. One thing I find it really helpful for is running my DeepSeek output (which produces pretty… “direct” language) through Claude to soften it up.

And finally there’s ChatGPT, which up until the last few days was my primary for everything. It’s easy to get started with, it’s encouraging, it produces compelling results, but it’s kind of a piece of shit. It’s constantly messing up context, screwing up citations, and arbitrarily rewriting things. It’s easily the most obstinately wrong of the LLMs, insisting that it’s doing something correctly no matter how many times you paste the incorrect output back into it and it admits it messed up. It’s the only one of the four that I’ve had to deal with “hallucinations” from, and it explains them as “woops, I overwrite my own context a lot accidentally”. It’s absolutely maddening at times. I’m steadily weaning myself off of it, and it’ll probably just be another voice in the room soon.

An interesting thing happened a bit ago: I had my first LLM disagreement! The Defense responded to a discovery request with bullshit and boilerplate, so I asked each of the LLMs what a good email response would be, considering I’m already nearly done with a Rule 37 motion to compel. Claude responded that it’s probably not a good idea to respond at all: just update the motion with the response as an exhibit, note the bullshit parts since they helped (a lot), and call it a day. DeepSeek was its typical “RIP THEIR THROATS OUT” self and provided a detailed list of regulations and rules for going on the offensive instead of only using the motion route. Gemini pointed out that going the email route is more about reinforcing our meet-and-confer duty: we’re right, and we are going to integrate this into the motion, but the focus here should be on making the meet-and-confer record rock solid (pretty good advice). ChatGPT recommended the email, but produced recommendations that didn’t focus on strategy at all, just a direct rebuttal to the discovery response.

Interestingly, opposing counsel asked for a discovery extension (after the suit has been active for more than two years, they are suddenly finding SHRA’s bureaucracy hard to navigate), but refused to do a rolling production or produce electronic records (which should be pretty easy, right?). It’s baffling: if I’d had access to these tools two years ago, this organization is such a mess that they’d have gotten crushed out of the gate. Instead they were able to delay and avoid discovery to the point that we now have to start talking about institutional spoliation, because no one remembers anything.


Just had the thought that doing a blog like this while the litigation is ongoing is both naive and stupid, especially considering the mixed permissible-use messaging from the court (but maybe what’s okay for the bar is okay for me?). This popped up as I came across an old archive containing a backup of files I believed were lost after we got evicted, some of which are responsive to existing discovery requests. I haven’t even had a chance to go through it all yet and see what has already been disclosed and what hasn’t, but here I am in a position where I really want to talk about how the tools are helping me plan strategy while potentially exposing myself unnecessarily or inadvertently.

The fool forges forward

One of the things that stood out to me was how much of the information in this case should have already been protected but hasn’t been. This has always been a concern, and one of the reasons I was hesitant about pursuing this at all: the gross invasion of privacy necessary to pursue it feels caustic. My initial reaction was to file anonymously, but after some reading it appeared that anonymity is very rarely granted. It ultimately became a question of whether this was important enough (to both us and every other disabled participant SHRA treats similarly) to sacrifice our privacy over. I think so.

What should have been done out of the gate was putting a protective order in place. While reading and trying to absorb as much of the local rules as I could (in the middle of being evicted, being cut off from other services, and other harassment), the option of a protective order just didn’t register at all. The further along the process we got, the less space the option took up.

This is one place where LLMs work as an amazing sanity check: we can, without penalty, ask one to walk us through the process from the beginning, with as many permutations as we like, in a way that would cognitively exhaust any sane individual. And for people in the position we’re in, where we don’t even know what we don’t know, being able to constantly factor new information in as we encounter it becomes a tremendously powerful tool.

While there’s supposed to be some leeway for “inelegant pleading” by pro se parties, in practice there isn’t unless a judge is personally sympathetic. Which means pro se parties march into a minefield of issues that have nothing to do with the core of the case: procedural explosions that can turn a clear-cut set of facts into an irrelevant mist.


One of the most difficult aspects of all this is how each LLM introduces its own unique issues. I’m bouncing between Claude 3.7, GPT-4o, DeepSeek, and Gemini, and they all have weird peculiarities that blow up the others. DeepSeek, for instance, is super focused on formatting, obsessing over tables, bolding, italics, everything but the actual content. Gemini is a turd that fights you the whole way on legal advice altogether, so you have to wrestle it to the ground with prompts and cobble together the results. Claude writes beautifully but is as deep as a puddle, while ChatGPT is deep but impossibly scattered and distractible at the same time.

I went back and looked at the first two pleadings I used these LLMs for, and holy crap, there are a lot of issues I missed by trusting the output. First, ChatGPT will absolutely mangle the hell out of citations, and worse, it’s super confident about its manglings. As an example, I uploaded a copy of the docket report, and somehow it still got every single citation wrong. Worse, it misinterpreted who actually filed particular motions, leading it to conclude that most of the case was dead.

One thing is really clear: as powerful as these tools are and as much potential as they have, we are a million miles away from them being usable without extensive supervision. They’re also really terrible at understanding intent. As an example, left to their own devices, all of the LLMs completely ignored the actual relief I asked for in a motion for partial MSJ (basically just a ruling that the ADA, FHA, and Rehab Act require an interactive process and individualized assessment) and transformed it into a full-blown MSJ focused on damage calculations instead of narrow injunctive relief. And they constantly go off on little tangents like that, hyper-focusing on some aspect of the text that may or may not be contextually important and trying to expand it.

All said, this is still very much a work in progress, and I’m learning that trust is bad.



Welcome to The SHRA Files Blog — where we’ll be sharing updates, research, public records, and commentary on our ongoing civil rights lawsuit and the broader failures of Sacramento’s housing system.

This site exists because no one else was telling this story. Because no agency, nonprofit, or public official stepped in when the Sacramento Housing and Redevelopment Agency (SHRA) failed to meet its basic responsibilities. Because we — and many others like us — were denied access to services that should have been guaranteed.

We started with nothing but documentation, determination, and the experience of being repeatedly failed by the systems meant to help. When traditional legal aid and advocacy organizations told us they couldn’t help, we turned to AI tools to help us structure our case, understand the law, and begin fighting back.

This site is just the beginning.

In the weeks ahead, we’ll be:

  • Breaking down SHRA’s failures in serving disabled and low-income tenants
  • Publishing public filings and supporting exhibits
  • Highlighting how the Housing Choice Voucher program is being mismanaged
  • Documenting the role of AI in expanding legal access

If you’ve been impacted by SHRA or want to contribute to the project, check out our Get Involved page.

Thanks for reading — and welcome to the fight.

— The SHRA Files Team

Leave a Comment

Your email address will not be published. Required fields are marked *

I’ve been attempting to push discovery, the result has been underwhelming. So far they’ve missed most of the deadlines, when notified they missed the deadlines they keep asking for extensions, and when they get extensions they respond with boilerplate and bullshit. I’m to the point where I’m filing a Rule 37(b) motion (will upload to the files section) after I’m finished with this post. The neat thing about the filing is that there’s over 300 pages of exhibits, all of which is boilerplate and bullshit that I had to attach and mail back to the Defense because the magistrate gave me the big middle finger by allowing the court to notify me of things electronically, but refusing to allow me to file electronically. I have lots to say about this magistrate, but we’ll wait until we get clear of her first.

One of the most fascinating things about the boilerplate and bullshit that did get returned was the revelation that five of the named Defendants no longer work at SHRA, and the Defense isn’t sure if they represent them anymore. Like they completely hid that almost everyone involved with the day to day operations over there has left and somehow didn’t mention it to us or the court. Worse, in their initial disclosure, they claimed only one person (who is no longer at SHRA) was responsible for deciding reasonable accommodation requests, now they’ve given me a list of 16 people, with no explanation.

Oh yeah, they can’t seem to find any of the electronically stored information that under the CPRA they’d be responsive to, which is the reason they need so many extensions.

The worst part is that despite them admitting all of this, there’s zero chance of them getting sanctions. And there’s literally nothing to be done about it unless/until it gets to the appellate level.

Leave a Comment

Your email address will not be published. Required fields are marked *

Up until now I assumed that the issues at SHRA were just incompetence creep, that the issues are product of slow pushes back on compliance issues that never got corrected, to the point where institutional knowledge of the practices necessary to maintain compliance was lost.

One of the things that I really like about DeepSeek is that it gets information through methods that aren’t exactly on the up and up, which means that it finds things which are hidden behind legal wrangling that doesn’t exist in China. And one of the things it found was a 2023 Internal Memo generated through whistleblower testimony in which SHRA staff instructed to “prioritize voucher terminations over costly mods”.

And the implementation of this was SHRA terminating vouchers over requests for shower chairs and grab bars as early as 2022. They provided data which showed 68% of disabled tenants who requested mods in 2023 lost vouchers within 6 months (vs. 12% of non-disabled).

WHAT.

And there’s so much worse that I’ll share once I get the discovery back. I’m stunned.

Leave a Comment

Your email address will not be published. Required fields are marked *

Okay the last few days have been a bit of a flurry. I’m still working on getting all the filings and the Defense responses uploaded, but I need to make sure we have a sane directory setup first because there’s quite a bit.

After a few weeks using these LLMs, my attitudes about them are starting to shift. Each of them seems to have their own “personality” above and beyond the artificial personality. The LLM I’ve been most amazed by so far is DeepSeek, which is funny because it was the LLM I was most skeptical of initially. Part of the issue with DeepSeek is that it doesn’t do as good of a job hand holding through the though process, or more appropriately, it’s probably the most “autistic” of the engines. The biggest standout for DeepSeek (especially compared to Gemini which is the most comparable) is that it’s SO GOOD with citations. It’s the only one that freely “admits” it’s plundering the WestLaw database meaning we have a TON of local Eastern District citations.

My citation strategy has been to use something from each level, Supreme Court, 9th Circuit, California (Fed), Eastern District. The reality is the magistrate doesn’t read them anyway, but DeepSeek ultimately gives me a pool of citations to choose from, something the other LLMs struggle with (or just don’t do no matter how you beat them).

Gemini (using 2.5 Pro/5-06) is now my primary writing starting point, even though I hate that it can’t export to .odf or even .docx formats. Because I have to reformat the documents significantly, it has the worst “canvas” style option. That’s kind of worth the annoyance though since Gemini is easily the most “thoughful” and well rounded of the LLMs I’ve used. Much like DeepSeek gives lots of citation options, Gemini gives the best pool of writing options, it’s ability to flexibly address issues in different ways is the best of the bunch so far.

Claude is still my pretty but shallow option, it comes up with the best/most compelling pure language, even if it’s depth of analysis falls short of the others. I wish I didn’t have to prompt it so explicitly to get decent results or it might be the primary canvas despite it’s painful context window. One thing I find it really helpful for is running my DeepSeek (which produces pretty.. “direct” language) through Claude to soften it up.

And finally there’s ChatGPT which up until the last few days was my primary for everything. It’s easy to get started with it, it’s encouraging, it produces compelling results, but it’s kind of a piece of shit. It’s constantly messing up context, screwing up citations, and arbitrarily rewriting things. It’s easily the most obstinately wrong of the LLMs, insisting that it’s doing something correctly no matter how many times you post the incorrect output back into it and it admits it messed up. It’s the only LLM out of the three that I’ve had to deal with “hallucinations”, and it explains it as “woops, I overwrite my own context a lot accidentally”. It’s absolutely maddening at times. I’m steadily weaning myself off of it and it’ll probably just be another voice in the room soon.

An interesting thing happened a bit ago, I had my first LLM disagreement! The Defense responded to a discovery request with bullshit and boilerplate, so I asked each of the LLMs what a good email response would be considering I’m already nearly done with a rule 37 motion to compel. Claude responded that it’s probably not a good idea to respond at all, just update the motion with the response as an exhibit, note the bullshit parts since they helped (a lot), and call it a day. DeepSeek was it’s typical “RIP THEIR THROATS OUT” and provided a detailed list of regulations and rules to go on the offensive instead of only using the motion route. Gemini pointed out that going the email route is more about reinforcing our meet and confer duty, we’re right and we are going to be integrating this in the motion, but the focus here should be on making the meet and confer rock solid (pretty good advice). ChatGPT recommended the email, but produced recommendations that didn’t focus on strategy at all, just a direct rebuttal to the discovery response.

Interestingly, opposing counsel asked for a discovery extension (after more than two years the suit has been active, they are suddenly finding SHRA bureaucracy hard to navigate), but refused to do a rolling production or produce electronic records (which should be pretty easy right?). It’s baffling to me that if I had access to these tools two years ago, this organization is such a mess that they’d have gotten crushed out of the gate. Instead they were able to delay and avoid discovery to the point that we now have to start talking about institutional spoliation because no one remembers anything.

Leave a Comment

Your email address will not be published. Required fields are marked *

Just had the thought that doing a blog like this, while the litigation is on-going, is both naive and stupid, especially considering the mixed permissible use messaging from the court (but maybe okay for the bar, okay for me?). This popped up as I came across an old archive that had a backup of files which I believed was lost after we got evicted, some of which is responsive to existing discovery requests. I haven’t even had a chance to go through it all yet and see what has already been disclosed and what hasn’t, but here I am in a position where I really want to talk about how the tools are helping me plan strategy while potentially exposing myself unnecessarily/inadvertently.

The fool forges forward

One of the things that stood out to me was how much of the information in this case should have already been protected but hasn’t been. This has always been a concern, and one of the reasons I was hesitant about pursuing this at all, the gross invasion of privacy necessary to pursue this feels caustic. My initial reaction was filing anonymously, but after some reading it appeared that these are very rarely granted. It ultimately became a choice between whether this was important enough (to both us and every other disabled participant SHRA treats similarly) to sacrifice our privacy over. I think so.

What should have been done out of the gate is setting a protective order in place. Reading and trying to absorb as much of the local rules (while in the middle of being evicted, being cut off from other services, and other harassment), the option of a protective order just didn’t really register at all. The further along the process we got, the less space the option took up.

This is one place where LLMs work as an amazing sanity check, we can without penalty ask it to walk us through the process from the beginning, with as many permutations as we like, in a way that would cognitively exhaust any sane individual. And for individuals in the position we are, where we don’t even know what we don’t know, being able to constantly refactor new information in as we encounter it becomes a tremendously powerful tool.

While there’s supposed to be some leeway for “inelegant pleading” by pro se parties, in practice there isn’t unless a judge is personally sympathetic. Which means that pro se parties march into a minefield of issues that have nothing to do with the core issues of the case, procedural explosions which can turn a clear cut set of facts into an irrelevant mist.

Leave a Comment

Your email address will not be published. Required fields are marked *

One of the most difficult aspects of all this is how prone to unique issues each LLM introduces. I’m bouncing between Claude 3.7, GPT 4o, DeepSeek, and Gemini, and they all have weird peculiarities that blow up the others. DeepSeek for instance is super focused on formatting, being obsessive about tables, bolding, italics, everything but the actual content. Gemini is a turd and likes to fight you the whole time when it comes to legal advice altogether, so you have to wrestle it to the ground with prompts and cobble together the results. Claude writes beautifully but is as deep as a puddle, while ChatGPT is deep but impossibly schizophrenic and ADHD at the same time.

I went back and looked at my first two pleadings I used these LLMs for and holy crap there are a lot of issues that I missed trusting the output. First, ChatGPT will absolutely mangle the hell out of citations, and worse is super confident about it’s manglings. As an example I uploaded a copy of the docket report, and somehow it still got every single citation wrong. Worse, it misinterpreted who actually filed particular motions, leading to it interpreting most of the case being dead.

One thing is really clear, that as powerful and as much potential as these tools have, we are a million miles away from them being ready to use without extensive supervision. They also are really terrible at understanding, as an example left to it’s own devices all of the LLMs completely ignored the actual damages I asked for in a motion for partial MSJ (basically just ruling that the ADA, FHA, and Rehab require an interactive process and individualized assessment) and transformed it into a full blown MSJ focused on damage calculations instead of a narrow injunctive relief. And they constantly go off on little tangents like that where they hyper focus on some aspect of the text that may or may not be contextually important and try to expand it.

All said, this is still very much a work in progress, and I’m learning that trust is bad.

Leave a Comment

Your email address will not be published. Required fields are marked *

It seems we can’t find what you’re looking for. Perhaps searching can help.

Welcome to The SHRA Files Blog — where we’ll be sharing updates, research, public records, and commentary on our ongoing civil rights lawsuit and the broader failures of Sacramento’s housing system.

This site exists because no one else was telling this story. Because no agency, nonprofit, or public official stepped in when the Sacramento Housing and Redevelopment Agency (SHRA) failed to meet its basic responsibilities. Because we — and many others like us — were denied access to services that should have been guaranteed.

We started with nothing but documentation, determination, and the experience of being repeatedly failed by the systems meant to help. When traditional legal aid and advocacy organizations told us they couldn’t help, we turned to AI tools to help us structure our case, understand the law, and begin fighting back.

This site is just the beginning.

In the weeks ahead, we’ll be:

  • Breaking down SHRA’s failures in serving disabled and low-income tenants
  • Publishing public filings and supporting exhibits
  • Highlighting how the Housing Choice Voucher program is being mismanaged
  • Documenting the role of AI in expanding legal access

If you’ve been impacted by SHRA or want to contribute to the project, check out our Get Involved page.

Thanks for reading — and welcome to the fight.

— The SHRA Files Team

Leave a Comment

Your email address will not be published. Required fields are marked *

I’ve been attempting to push discovery, the result has been underwhelming. So far they’ve missed most of the deadlines, when notified they missed the deadlines they keep asking for extensions, and when they get extensions they respond with boilerplate and bullshit. I’m to the point where I’m filing a Rule 37(b) motion (will upload to the files section) after I’m finished with this post. The neat thing about the filing is that there’s over 300 pages of exhibits, all of which is boilerplate and bullshit that I had to attach and mail back to the Defense because the magistrate gave me the big middle finger by allowing the court to notify me of things electronically, but refusing to allow me to file electronically. I have lots to say about this magistrate, but we’ll wait until we get clear of her first.

One of the most fascinating things about the boilerplate and bullshit that did get returned was the revelation that five of the named Defendants no longer work at SHRA, and the Defense isn’t sure if they represent them anymore. Like they completely hid that almost everyone involved with the day to day operations over there has left and somehow didn’t mention it to us or the court. Worse, in their initial disclosure, they claimed only one person (who is no longer at SHRA) was responsible for deciding reasonable accommodation requests, now they’ve given me a list of 16 people, with no explanation.

Oh yeah, they can’t seem to find any of the electronically stored information that under the CPRA they’d be responsive to, which is the reason they need so many extensions.

The worst part is that despite them admitting all of this, there’s zero chance of them getting sanctions. And there’s literally nothing to be done about it unless/until it gets to the appellate level.

Leave a Comment

Your email address will not be published. Required fields are marked *

Up until now I assumed that the issues at SHRA were just incompetence creep, that the issues are product of slow pushes back on compliance issues that never got corrected, to the point where institutional knowledge of the practices necessary to maintain compliance was lost.

One of the things that I really like about DeepSeek is that it gets information through methods that aren’t exactly on the up and up, which means that it finds things which are hidden behind legal wrangling that doesn’t exist in China. And one of the things it found was a 2023 Internal Memo generated through whistleblower testimony in which SHRA staff instructed to “prioritize voucher terminations over costly mods”.

And the implementation of this was SHRA terminating vouchers over requests for shower chairs and grab bars as early as 2022. They provided data which showed 68% of disabled tenants who requested mods in 2023 lost vouchers within 6 months (vs. 12% of non-disabled).

WHAT.

And there’s so much worse that I’ll share once I get the discovery back. I’m stunned.

Leave a Comment

Your email address will not be published. Required fields are marked *

Okay the last few days have been a bit of a flurry. I’m still working on getting all the filings and the Defense responses uploaded, but I need to make sure we have a sane directory setup first because there’s quite a bit.

After a few weeks using these LLMs, my attitudes about them are starting to shift. Each of them seems to have their own “personality” above and beyond the artificial personality. The LLM I’ve been most amazed by so far is DeepSeek, which is funny because it was the LLM I was most skeptical of initially. Part of the issue with DeepSeek is that it doesn’t do as good of a job hand holding through the though process, or more appropriately, it’s probably the most “autistic” of the engines. The biggest standout for DeepSeek (especially compared to Gemini which is the most comparable) is that it’s SO GOOD with citations. It’s the only one that freely “admits” it’s plundering the WestLaw database meaning we have a TON of local Eastern District citations.


Just had the thought that doing a blog like this while the litigation is ongoing is both naive and stupid, especially considering the mixed messaging from the court about permissible use (but maybe what’s okay for the bar is okay for me?). This came up because I found an old archive containing a backup of files I believed were lost after we got evicted, some of which are responsive to existing discovery requests. I haven’t even had a chance to go through it all yet to see what has and hasn’t already been disclosed, but here I am, wanting to talk about how the tools are helping me plan strategy while potentially exposing myself unnecessarily or inadvertently.

The fool forges forward

One of the things that stood out to me was how much of the information in this case should already have been protected but hasn’t been. This has always been a concern, and one of the reasons I was hesitant about pursuing this at all: the gross invasion of privacy necessary to pursue it feels caustic. My initial instinct was to file anonymously, but after some reading it appears those requests are very rarely granted. It ultimately became a question of whether this was important enough (to both us and every other disabled participant SHRA treats this way) to sacrifice our privacy over. I think it is.

What should have been done out of the gate was putting a protective order in place. While reading and trying to absorb as much of the local rules as I could (in the middle of being evicted, being cut off from other services, and other harassment), the option of a protective order just didn’t register at all. The further along the process we got, the less space the option took up.

This is one place where LLMs work as an amazing sanity check: we can, without penalty, ask one to walk us through the process from the beginning, with as many permutations as we like, in a way that would cognitively exhaust any sane individual. And for people in our position, who don’t even know what we don’t know, being able to constantly fold new information in as we encounter it is a tremendously powerful tool.

While there’s supposed to be some leeway for “inelegant pleading” by pro se parties, in practice there isn’t unless a judge is personally sympathetic. Which means that pro se parties march into a minefield of issues that have nothing to do with the core issues of the case, procedural explosions which can turn a clear cut set of facts into an irrelevant mist.

One of the most difficult aspects of all this is the unique set of issues each LLM introduces. I’m bouncing between Claude 3.7, GPT-4o, DeepSeek, and Gemini, and they all have weird peculiarities that blow up the others. DeepSeek, for instance, is super focused on formatting, obsessing over tables, bolding, and italics, everything but the actual content. Gemini is a turd that fights you the whole time about giving legal advice at all, so you have to wrestle it to the ground with prompts and cobble together the results. Claude writes beautifully but is as deep as a puddle, while ChatGPT is deep but impossibly schizophrenic and ADHD at the same time.

I went back and looked at the first two pleadings I used these LLMs for, and holy crap, there are a lot of issues I missed by trusting the output. First, ChatGPT will absolutely mangle the hell out of citations, and worse, it’s super confident about its manglings. As an example, I uploaded a copy of the docket report and it still got every single citation wrong. Worse, it misinterpreted who actually filed particular motions, leading it to conclude that most of the case was dead.

One thing is really clear: as powerful as these tools are and as much potential as they have, we are a million miles away from them being ready to use without extensive supervision. They’re also really terrible at understanding intent. As an example, left to their own devices, all of the LLMs completely ignored the narrow relief I asked for in a motion for partial MSJ (basically just a ruling that the ADA, FHA, and Rehabilitation Act require an interactive process and individualized assessment) and transformed it into a full-blown MSJ focused on damage calculations instead of narrow injunctive relief. They constantly go off on little tangents like that, hyper-focusing on some aspect of the text that may or may not be contextually important and trying to expand it.

All said, this is still very much a work in progress, and I’m learning that trust is bad.

Welcome to The SHRA Files Blog — where we’ll be sharing updates, research, public records, and commentary on our ongoing civil rights lawsuit and the broader failures of Sacramento’s housing system.

This site exists because no one else was telling this story. Because no agency, nonprofit, or public official stepped in when the Sacramento Housing and Redevelopment Agency (SHRA) failed to meet its basic responsibilities. Because we — and many others like us — were denied access to services that should have been guaranteed.

We started with nothing but documentation, determination, and the experience of being repeatedly failed by the systems meant to help. When traditional legal aid and advocacy organizations told us they couldn’t help, we turned to AI tools to help us structure our case, understand the law, and begin fighting back.

This site is just the beginning.

In the weeks ahead, we’ll be:

  • Breaking down SHRA’s failures in serving disabled and low-income tenants
  • Publishing public filings and supporting exhibits
  • Highlighting how the Housing Choice Voucher program is being mismanaged
  • Documenting the role of AI in expanding legal access

If you’ve been impacted by SHRA or want to contribute to the project, check out our Get Involved page.

Thanks for reading — and welcome to the fight.

— The SHRA Files Team

Up until now I assumed that the issues at SHRA were just incompetence creep: the product of slow backsliding on compliance issues that never got corrected, to the point where the institutional knowledge needed to maintain compliance was lost.

One of the things I really like about DeepSeek is that it gets information through methods that aren’t exactly on the up and up, which means it finds things hidden behind legal wrangling that doesn’t exist in China. One of the things it turned up was a 2023 internal memo, generated through whistleblower testimony, in which SHRA staff were instructed to “prioritize voucher terminations over costly mods”.

The implementation of this was SHRA terminating vouchers over requests for shower chairs and grab bars as early as 2022. It also produced data showing that 68% of disabled tenants who requested mods in 2023 lost their vouchers within six months (vs. 12% of non-disabled tenants).

WHAT.

And there’s so much worse that I’ll share once I get the discovery back. I’m stunned.

Okay, the last few days have been a bit of a flurry. I’m still working on getting all the filings and the Defense’s responses uploaded, but I need to make sure we have a sane directory setup first, because there’s quite a bit.
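For what it’s worth, the layout I’m leaning toward is something like this. A sketch only: the folder names below are placeholders, not the real docket entries.

```shell
# Placeholder layout for the files section: one folder per document type,
# with a subfolder per motion so exhibits stay next to what they support.
mkdir -p files/filings/rule-37-motion
mkdir -p files/exhibits/rule-37-motion
mkdir -p files/defense-responses

# Show the top-level structure.
ls files
```

Keeping exhibits mirrored against filings by motion name means I can upload a motion and its 300 pages of exhibits as one unit instead of untangling them later.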

After a few weeks using these LLMs, my attitudes about them are starting to shift. Each of them seems to have its own “personality” above and beyond the artificial one. The LLM I’ve been most amazed by so far is DeepSeek, which is funny because it was the one I was most skeptical of initially. Part of the issue with DeepSeek is that it doesn’t do as good a job of hand-holding you through the thought process; it’s probably the most “autistic” of the engines. The biggest standout for DeepSeek (especially compared to Gemini, which is the most comparable) is that it’s SO GOOD with citations. It’s the only one that freely “admits” it’s plundering the Westlaw database, meaning we have a TON of local Eastern District citations.

My citation strategy has been to use something from each level: Supreme Court, Ninth Circuit, California federal courts, and the Eastern District specifically. The reality is the magistrate doesn’t read them anyway, but DeepSeek gives me a pool of citations to choose from, something the other LLMs struggle with (or just won’t do no matter how you beat on them).
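To illustrate what I mean by “something from each level”: the selection step is basically this. A toy sketch only; the citation entries below are placeholders, not real cases.

```python
# One-per-level citation selection: given a pool of candidate citations
# tagged by court level, keep the first available cite at each level.
LEVELS = ["Supreme Court", "9th Circuit", "California (Fed)", "Eastern District"]

def pick_per_level(pool):
    """Return one citation per court level, in top-down order."""
    chosen = []
    for level in LEVELS:
        for cite in pool:
            if cite["level"] == level:
                chosen.append(cite)
                break  # only need one cite per level
    return chosen

# Placeholder pool, deliberately out of order.
pool = [
    {"level": "Eastern District", "cite": "Placeholder v. Example (E.D. Cal.)"},
    {"level": "Supreme Court", "cite": "Placeholder v. Example (U.S.)"},
    {"level": "9th Circuit", "cite": "Placeholder v. Example (9th Cir.)"},
    {"level": "California (Fed)", "cite": "Placeholder v. Example (C.D. Cal.)"},
]

print([c["level"] for c in pick_per_level(pool)])
```

The point of the exercise is the ordering: binding authority first, then the local cites the magistrate would actually have to distinguish.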

Gemini (2.5 Pro, the 5-06 build) is now my primary starting point for writing, even though I hate that it can’t export to .odf or even .docx. It also has the worst “canvas” style option, so I have to reformat the documents significantly. That’s worth the annoyance, though, since Gemini is easily the most “thoughtful” and well-rounded of the LLMs I’ve used. Much like DeepSeek gives lots of citation options, Gemini gives the best pool of writing options; its ability to flexibly address issues in different ways is the best of the bunch so far.

Claude is still my pretty-but-shallow option: it comes up with the best, most compelling pure language, even if its depth of analysis falls short of the others. I wish I didn’t have to prompt it so explicitly to get decent results, or it might be the primary canvas despite its painful context window. One thing I find it really helpful for is running my DeepSeek output (which tends toward pretty “direct” language) through Claude to soften it up.
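The DeepSeek-then-Claude pass is really just a two-stage pipeline. Sketched out below, with `ask()` as a stand-in for me copy-pasting between browser tabs; there is no real API call anywhere in this, and the prompts are just the gist of what I actually type.

```python
# Two-stage drafting pass: aggressive, citation-heavy first draft,
# then a softening rewrite. ask() is a placeholder for a chat window.
def ask(model, prompt):
    # Stand-in: in practice this is manual copy-paste, not an API.
    return f"[{model} response to: {prompt[:40]}...]"

def soften(section_request):
    """Draft hard with DeepSeek, then tone it down with Claude."""
    draft = ask("deepseek", f"Draft this section, cite aggressively: {section_request}")
    final = ask("claude", f"Rewrite this to be firm but professional: {draft}")
    return final

print(soften("meet-and-confer history"))
```

The design choice is just division of labor: one model supplies substance and citations, the other supplies tone, and neither is trusted to do both.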

And finally there’s ChatGPT, which up until the last few days was my primary for everything. It’s easy to get started with, it’s encouraging, and it produces compelling results, but it’s kind of a piece of shit. It’s constantly messing up context, screwing up citations, and arbitrarily rewriting things. It’s easily the most obstinately wrong of the LLMs, insisting it’s doing something correctly no matter how many times you paste the incorrect output back in and it admits it messed up. It’s the only one of the four I’ve had to deal with “hallucinations” from, and it explains them as “whoops, I accidentally overwrite my own context a lot.” It’s absolutely maddening at times. I’m steadily weaning myself off of it, and it’ll probably just be another voice in the room soon.

An interesting thing happened a bit ago: I had my first LLM disagreement! The Defense responded to a discovery request with boilerplate and bullshit, so I asked each of the LLMs what a good email response would look like, considering I’m already nearly done with a Rule 37 motion to compel. Claude responded that it’s probably not a good idea to respond at all: just update the motion with the response as an exhibit, note the bullshit parts since they helped (a lot), and call it a day. DeepSeek was its typical “RIP THEIR THROATS OUT” self and provided a detailed list of regulations and rules for going on the offensive instead of relying on the motion alone. Gemini pointed out that the email route is really about reinforcing our meet-and-confer duty: we’re right, and we’re going to integrate this into the motion, but the focus here should be on making the meet-and-confer record rock solid (pretty good advice). ChatGPT recommended the email but produced recommendations that didn’t focus on strategy at all, just a direct rebuttal to the discovery response.

Interestingly, opposing counsel asked for a discovery extension (after more than two years of active litigation, they’re suddenly finding SHRA’s bureaucracy hard to navigate), but refused to do a rolling production or to produce electronic records (which should be pretty easy, right?). What’s baffling is that if I’d had access to these tools two years ago, this organization is such a mess that they’d have gotten crushed out of the gate. Instead they were able to delay and avoid discovery to the point that we now have to start talking about institutional spoliation, because no one remembers anything.

Leave a Comment

Your email address will not be published. Required fields are marked *

Just had the thought that doing a blog like this, while the litigation is on-going, is both naive and stupid, especially considering the mixed permissible use messaging from the court (but maybe okay for the bar, okay for me?). This popped up as I came across an old archive that had a backup of files which I believed was lost after we got evicted, some of which is responsive to existing discovery requests. I haven’t even had a chance to go through it all yet and see what has already been disclosed and what hasn’t, but here I am in a position where I really want to talk about how the tools are helping me plan strategy while potentially exposing myself unnecessarily/inadvertently.

The fool forges forward

One of the things that stood out to me was how much of the information in this case should have already been protected but hasn’t been. This has always been a concern, and one of the reasons I was hesitant about pursuing this at all, the gross invasion of privacy necessary to pursue this feels caustic. My initial reaction was filing anonymously, but after some reading it appeared that these are very rarely granted. It ultimately became a choice between whether this was important enough (to both us and every other disabled participant SHRA treats similarly) to sacrifice our privacy over. I think so.

What should have been done out of the gate is setting a protective order in place. Reading and trying to absorb as much of the local rules (while in the middle of being evicted, being cut off from other services, and other harassment), the option of a protective order just didn’t really register at all. The further along the process we got, the less space the option took up.

This is one place where LLMs work as an amazing sanity check, we can without penalty ask it to walk us through the process from the beginning, with as many permutations as we like, in a way that would cognitively exhaust any sane individual. And for individuals in the position we are, where we don’t even know what we don’t know, being able to constantly refactor new information in as we encounter it becomes a tremendously powerful tool.

While there’s supposed to be some leeway for “inelegant pleading” by pro se parties, in practice there isn’t unless a judge is personally sympathetic. Which means that pro se parties march into a minefield of issues that have nothing to do with the core issues of the case, procedural explosions which can turn a clear cut set of facts into an irrelevant mist.

Leave a Comment

Your email address will not be published. Required fields are marked *

One of the most difficult aspects of all this is how prone to unique issues each LLM introduces. I’m bouncing between Claude 3.7, GPT 4o, DeepSeek, and Gemini, and they all have weird peculiarities that blow up the others. DeepSeek for instance is super focused on formatting, being obsessive about tables, bolding, italics, everything but the actual content. Gemini is a turd and likes to fight you the whole time when it comes to legal advice altogether, so you have to wrestle it to the ground with prompts and cobble together the results. Claude writes beautifully but is as deep as a puddle, while ChatGPT is deep but impossibly schizophrenic and ADHD at the same time.

I went back and looked at my first two pleadings I used these LLMs for and holy crap there are a lot of issues that I missed trusting the output. First, ChatGPT will absolutely mangle the hell out of citations, and worse is super confident about it’s manglings. As an example I uploaded a copy of the docket report, and somehow it still got every single citation wrong. Worse, it misinterpreted who actually filed particular motions, leading to it interpreting most of the case being dead.

One thing is really clear, that as powerful and as much potential as these tools have, we are a million miles away from them being ready to use without extensive supervision. They also are really terrible at understanding, as an example left to it’s own devices all of the LLMs completely ignored the actual damages I asked for in a motion for partial MSJ (basically just ruling that the ADA, FHA, and Rehab require an interactive process and individualized assessment) and transformed it into a full blown MSJ focused on damage calculations instead of a narrow injunctive relief. And they constantly go off on little tangents like that where they hyper focus on some aspect of the text that may or may not be contextually important and try to expand it.

All said, this is still very much a work in progress, and I’m learning that trust is bad.

Leave a Comment

Your email address will not be published. Required fields are marked *

It seems we can’t find what you’re looking for. Perhaps searching can help.

Welcome to The SHRA Files Blog — where we’ll be sharing updates, research, public records, and commentary on our ongoing civil rights lawsuit and the broader failures of Sacramento’s housing system.

This site exists because no one else was telling this story. Because no agency, nonprofit, or public official stepped in when the Sacramento Housing and Redevelopment Agency (SHRA) failed to meet its basic responsibilities. Because we — and many others like us — were denied access to services that should have been guaranteed.

We started with nothing but documentation, determination, and the experience of being repeatedly failed by the systems meant to help. When traditional legal aid and advocacy organizations told us they couldn’t help, we turned to AI tools to help us structure our case, understand the law, and begin fighting back.

This site is just the beginning.

In the weeks ahead, we’ll be:

  • Breaking down SHRA’s failures in serving disabled and low-income tenants
  • Publishing public filings and supporting exhibits
  • Highlighting how the Housing Choice Voucher program is being mismanaged
  • Documenting the role of AI in expanding legal access

If you’ve been impacted by SHRA or want to contribute to the project, check out our Get Involved page.

Thanks for reading — and welcome to the fight.

— The SHRA Files Team

Leave a Comment

Your email address will not be published. Required fields are marked *

I’ve been attempting to push discovery, the result has been underwhelming. So far they’ve missed most of the deadlines, when notified they missed the deadlines they keep asking for extensions, and when they get extensions they respond with boilerplate and bullshit. I’m to the point where I’m filing a Rule 37(b) motion (will upload to the files section) after I’m finished with this post. The neat thing about the filing is that there’s over 300 pages of exhibits, all of which is boilerplate and bullshit that I had to attach and mail back to the Defense because the magistrate gave me the big middle finger by allowing the court to notify me of things electronically, but refusing to allow me to file electronically. I have lots to say about this magistrate, but we’ll wait until we get clear of her first.

One of the most fascinating things about the boilerplate and bullshit that did get returned was the revelation that five of the named Defendants no longer work at SHRA, and the Defense isn’t sure if they represent them anymore. Like they completely hid that almost everyone involved with the day to day operations over there has left and somehow didn’t mention it to us or the court. Worse, in their initial disclosure, they claimed only one person (who is no longer at SHRA) was responsible for deciding reasonable accommodation requests, now they’ve given me a list of 16 people, with no explanation.

Oh yeah, they can’t seem to find any of the electronically stored information that under the CPRA they’d be responsive to, which is the reason they need so many extensions.

The worst part is that despite them admitting all of this, there’s zero chance of them getting sanctions. And there’s literally nothing to be done about it unless/until it gets to the appellate level.

Leave a Comment

Your email address will not be published. Required fields are marked *

Up until now I assumed that the issues at SHRA were just incompetence creep, that the issues are product of slow pushes back on compliance issues that never got corrected, to the point where institutional knowledge of the practices necessary to maintain compliance was lost.

One of the things that I really like about DeepSeek is that it gets information through methods that aren’t exactly on the up and up, which means that it finds things which are hidden behind legal wrangling that doesn’t exist in China. And one of the things it found was a 2023 Internal Memo generated through whistleblower testimony in which SHRA staff instructed to “prioritize voucher terminations over costly mods”.

And the implementation of this was SHRA terminating vouchers over requests for shower chairs and grab bars as early as 2022. They provided data which showed 68% of disabled tenants who requested mods in 2023 lost vouchers within 6 months (vs. 12% of non-disabled).

WHAT.

And there’s so much worse that I’ll share once I get the discovery back. I’m stunned.

Leave a Comment

Your email address will not be published. Required fields are marked *

Okay the last few days have been a bit of a flurry. I’m still working on getting all the filings and the Defense responses uploaded, but I need to make sure we have a sane directory setup first because there’s quite a bit.

After a few weeks using these LLMs, my attitudes about them are starting to shift. Each of them seems to have their own “personality” above and beyond the artificial personality. The LLM I’ve been most amazed by so far is DeepSeek, which is funny because it was the LLM I was most skeptical of initially. Part of the issue with DeepSeek is that it doesn’t do as good of a job hand holding through the though process, or more appropriately, it’s probably the most “autistic” of the engines. The biggest standout for DeepSeek (especially compared to Gemini which is the most comparable) is that it’s SO GOOD with citations. It’s the only one that freely “admits” it’s plundering the WestLaw database meaning we have a TON of local Eastern District citations.

My citation strategy has been to use something from each level, Supreme Court, 9th Circuit, California (Fed), Eastern District. The reality is the magistrate doesn’t read them anyway, but DeepSeek ultimately gives me a pool of citations to choose from, something the other LLMs struggle with (or just don’t do no matter how you beat them).

Gemini (using 2.5 Pro/5-06) is now my primary writing starting point, even though I hate that it can’t export to .odf or even .docx formats. Because I have to reformat the documents significantly, it has the worst “canvas” style option. That’s kind of worth the annoyance though since Gemini is easily the most “thoughful” and well rounded of the LLMs I’ve used. Much like DeepSeek gives lots of citation options, Gemini gives the best pool of writing options, it’s ability to flexibly address issues in different ways is the best of the bunch so far.

Claude is still my pretty but shallow option, it comes up with the best/most compelling pure language, even if it’s depth of analysis falls short of the others. I wish I didn’t have to prompt it so explicitly to get decent results or it might be the primary canvas despite it’s painful context window. One thing I find it really helpful for is running my DeepSeek (which produces pretty.. “direct” language) through Claude to soften it up.

And finally there’s ChatGPT which up until the last few days was my primary for everything. It’s easy to get started with it, it’s encouraging, it produces compelling results, but it’s kind of a piece of shit. It’s constantly messing up context, screwing up citations, and arbitrarily rewriting things. It’s easily the most obstinately wrong of the LLMs, insisting that it’s doing something correctly no matter how many times you post the incorrect output back into it and it admits it messed up. It’s the only LLM out of the three that I’ve had to deal with “hallucinations”, and it explains it as “woops, I overwrite my own context a lot accidentally”. It’s absolutely maddening at times. I’m steadily weaning myself off of it and it’ll probably just be another voice in the room soon.

An interesting thing happened a bit ago, I had my first LLM disagreement! The Defense responded to a discovery request with bullshit and boilerplate, so I asked each of the LLMs what a good email response would be considering I’m already nearly done with a rule 37 motion to compel. Claude responded that it’s probably not a good idea to respond at all, just update the motion with the response as an exhibit, note the bullshit parts since they helped (a lot), and call it a day. DeepSeek was it’s typical “RIP THEIR THROATS OUT” and provided a detailed list of regulations and rules to go on the offensive instead of only using the motion route. Gemini pointed out that going the email route is more about reinforcing our meet and confer duty, we’re right and we are going to be integrating this in the motion, but the focus here should be on making the meet and confer rock solid (pretty good advice). ChatGPT recommended the email, but produced recommendations that didn’t focus on strategy at all, just a direct rebuttal to the discovery response.

Interestingly, opposing counsel asked for a discovery extension (after more than two years the suit has been active, they are suddenly finding SHRA bureaucracy hard to navigate), but refused to do a rolling production or produce electronic records (which should be pretty easy right?). It’s baffling to me that if I had access to these tools two years ago, this organization is such a mess that they’d have gotten crushed out of the gate. Instead they were able to delay and avoid discovery to the point that we now have to start talking about institutional spoliation because no one remembers anything.

Leave a Comment

Your email address will not be published. Required fields are marked *

Just had the thought that doing a blog like this, while the litigation is on-going, is both naive and stupid, especially considering the mixed permissible use messaging from the court (but maybe okay for the bar, okay for me?). This popped up as I came across an old archive that had a backup of files which I believed was lost after we got evicted, some of which is responsive to existing discovery requests. I haven’t even had a chance to go through it all yet and see what has already been disclosed and what hasn’t, but here I am in a position where I really want to talk about how the tools are helping me plan strategy while potentially exposing myself unnecessarily/inadvertently.

The fool forges forward

One of the things that stood out to me was how much of the information in this case should have already been protected but hasn’t been. This has always been a concern, and one of the reasons I was hesitant about pursuing this at all, the gross invasion of privacy necessary to pursue this feels caustic. My initial reaction was filing anonymously, but after some reading it appeared that these are very rarely granted. It ultimately became a choice between whether this was important enough (to both us and every other disabled participant SHRA treats similarly) to sacrifice our privacy over. I think so.

What should have been done out of the gate is setting a protective order in place. Reading and trying to absorb as much of the local rules (while in the middle of being evicted, being cut off from other services, and other harassment), the option of a protective order just didn’t really register at all. The further along the process we got, the less space the option took up.

This is one place where LLMs work as an amazing sanity check, we can without penalty ask it to walk us through the process from the beginning, with as many permutations as we like, in a way that would cognitively exhaust any sane individual. And for individuals in the position we are, where we don’t even know what we don’t know, being able to constantly refactor new information in as we encounter it becomes a tremendously powerful tool.

There's supposed to be some leeway for "inelegant pleading" by pro se parties, but in practice there isn't unless a judge is personally sympathetic. That means pro se parties march into a minefield of issues that have nothing to do with the core of the case: procedural explosions that can turn a clear-cut set of facts into an irrelevant mist.


One of the most difficult aspects of all this is the unique set of issues each LLM introduces. I'm bouncing between Claude 3.7, GPT-4o, DeepSeek, and Gemini, and they all have weird peculiarities that blow up the others' work. DeepSeek, for instance, is obsessed with formatting: tables, bolding, italics, everything but the actual content. Gemini fights you the whole way on anything resembling legal advice, so you have to wrestle it to the ground with prompts and cobble together the results. Claude writes beautifully but is as deep as a puddle, while ChatGPT is deep but impossibly scattered.

I went back and looked at the first two pleadings I used these LLMs for, and holy crap, there are a lot of issues I missed by trusting the output. First, ChatGPT will absolutely mangle the hell out of citations, and worse, it's supremely confident about its manglings. As an example, I uploaded a copy of the docket report, and it still got every single citation wrong. Worse, it misinterpreted who actually filed particular motions, leading it to conclude that most of the case was dead.
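That experience changed my workflow: nothing an LLM says about the docket gets used until it's checked against the actual docket report. The check itself is trivial to script. Here's a minimal sketch of the idea in Python (the entry numbers and descriptions below are invented for illustration, not from my case):

```python
# Cross-check LLM-proposed docket citations against the real docket report.
# All docket entries and "LLM claims" below are hypothetical examples.

def check_citations(proposed, docket):
    """Split (entry_number, description) pairs into those that match
    the real docket and those that need manual review."""
    verified, suspect = [], []
    for entry, description in proposed:
        if docket.get(entry) == description:
            verified.append((entry, description))
        else:
            suspect.append((entry, description))
    return verified, suspect

# The real docket, keyed by ECF entry number (hypothetical).
docket = {
    12: "Motion to Dismiss filed by Defendants",
    15: "Opposition to Motion to Dismiss filed by Plaintiff",
}

# What the LLM claimed (hypothetical): note it flips who filed entry 15,
# and invents entry 23 outright.
llm_claims = [
    (12, "Motion to Dismiss filed by Defendants"),
    (15, "Opposition to Motion to Dismiss filed by Defendants"),
    (23, "Order granting Motion to Dismiss"),
]

verified, suspect = check_citations(llm_claims, docket)
print(f"verified: {len(verified)}, needs manual review: {len(suspect)}")
```

Anything that lands in the suspect pile gets looked up by hand before it goes anywhere near a filing.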

One thing is really clear: as powerful as these tools are and as much potential as they have, we are a million miles away from them being ready to use without extensive supervision. They are also terrible at understanding intent. As an example, left to their own devices, all of the LLMs completely ignored the narrow relief I asked for in a motion for partial summary judgment (basically just a ruling that the ADA, FHA, and Rehabilitation Act require an interactive process and individualized assessment) and transformed it into a full-blown MSJ focused on damage calculations instead of narrow injunctive relief. And they constantly go off on tangents like that, hyper-focusing on some aspect of the text that may or may not be contextually important and trying to expand it.

All said, this is still very much a work in progress, and I’m learning that trust is bad.


Welcome to The SHRA Files Blog — where we’ll be sharing updates, research, public records, and commentary on our ongoing civil rights lawsuit and the broader failures of Sacramento’s housing system.

This site exists because no one else was telling this story. Because no agency, nonprofit, or public official stepped in when the Sacramento Housing and Redevelopment Agency (SHRA) failed to meet its basic responsibilities. Because we — and many others like us — were denied access to services that should have been guaranteed.

We started with nothing but documentation, determination, and the experience of being repeatedly failed by the systems meant to help. When traditional legal aid and advocacy organizations told us they couldn’t help, we turned to AI tools to help us structure our case, understand the law, and begin fighting back.

This site is just the beginning.

In the weeks ahead, we’ll be:

  • Breaking down SHRA’s failures in serving disabled and low-income tenants
  • Publishing public filings and supporting exhibits
  • Highlighting how the Housing Choice Voucher program is being mismanaged
  • Documenting the role of AI in expanding legal access

If you’ve been impacted by SHRA or want to contribute to the project, check out our Get Involved page.

Thanks for reading — and welcome to the fight.

— The SHRA Files Team


Up until now I assumed that the issues at SHRA were just incompetence creep: the product of slow backsliding on compliance issues that never got corrected, to the point where institutional knowledge of the practices necessary to maintain compliance was lost.

One of the things I really like about DeepSeek is that it gets information through methods that aren't exactly on the up and up, which means it finds things hidden behind legal wrangling that doesn't exist in China. And one of the things it surfaced was a 2023 internal memo, generated through whistleblower testimony, in which SHRA staff were instructed to "prioritize voucher terminations over costly mods".

And the implementation of this was SHRA terminating vouchers over requests for shower chairs and grab bars as early as 2022. The data it provided showed that 68% of disabled tenants who requested modifications in 2023 lost their vouchers within six months, versus 12% of non-disabled tenants.

WHAT.

And there’s so much worse that I’ll share once I get the discovery back. I’m stunned.


Okay, the last few days have been a bit of a flurry. I'm still working on getting all the filings and the Defense's responses uploaded, but I need to set up a sane directory structure first because there's quite a bit.

After a few weeks using these LLMs, my attitudes about them are starting to shift. Each seems to have its own "personality" above and beyond the artificial persona. The LLM I've been most amazed by so far is DeepSeek, which is funny because it's the one I was most skeptical of initially. Part of the issue with DeepSeek is that it doesn't do as good a job of hand-holding you through the thought process; or, more appropriately, it's probably the most "autistic" of the engines. Its biggest standout (especially compared to Gemini, which is the most comparable) is that it's SO GOOD with citations. It's the only one that freely "admits" it's plundering the Westlaw database, meaning we have a TON of local Eastern District citations.

My citation strategy has been to use something from each level: Supreme Court, Ninth Circuit, California federal courts, and the Eastern District specifically. The reality is the magistrate doesn't read them anyway, but DeepSeek gives me a pool of citations to choose from, something the other LLMs struggle with (or simply won't do no matter how you push them).

Gemini (using 2.5 Pro/5-06) is now my primary starting point for writing, even though I hate that it can't export to .odt or even .docx, and it has the worst "canvas"-style editor of the bunch, which means I have to reformat documents significantly. That annoyance is worth it, though, because Gemini is easily the most thoughtful and well-rounded of the LLMs I've used. Much as DeepSeek gives lots of citation options, Gemini gives the best pool of writing options; its ability to flexibly address issues in different ways is the best of the bunch so far.

Claude is still my pretty-but-shallow option: it comes up with the best, most compelling pure language, even if its depth of analysis falls short of the others. I wish I didn't have to prompt it so explicitly to get decent results, or it might be my primary drafting surface despite its painful context window. One thing I find it really helpful for is running DeepSeek's output (which tends toward pretty... "direct" language) through Claude to soften it up.

And finally there's ChatGPT, which until the last few days was my primary for everything. It's easy to get started with, it's encouraging, and it produces compelling results, but it's kind of a piece of shit. It constantly messes up context, screws up citations, and arbitrarily rewrites things. It's easily the most obstinately wrong of the four, insisting it's doing something correctly no matter how many times you paste the incorrect output back in and it admits it messed up. It's also the only one of the four where I've had to deal with outright hallucinations, which it explains as "whoops, I accidentally overwrite my own context a lot." It's absolutely maddening at times. I'm steadily weaning myself off of it, and it'll probably soon be just another voice in the room.

An interesting thing happened a bit ago: I had my first LLM disagreement! The Defense responded to a discovery request with boilerplate and bullshit, so I asked each of the LLMs what a good email response would be, considering I'm already nearly done with a Rule 37 motion to compel. Claude said it's probably not a good idea to respond at all: just update the motion with the response as an exhibit, note the bullshit parts (since they helped, a lot), and call it a day. DeepSeek was its typical "rip their throats out" self and provided a detailed list of regulations and rules for going on the offensive instead of relying on the motion alone. Gemini pointed out that the email route is really about reinforcing our meet-and-confer duty: we're right, and we're going to integrate this into the motion, but the focus should be on making the meet-and-confer record rock solid (pretty good advice). ChatGPT recommended the email but produced recommendations that didn't address strategy at all, just a direct rebuttal to the discovery response.
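The "ask everyone the same question and compare" routine is simple enough that I've started thinking of it as a loop rather than four separate chats. A sketch of the idea in Python, with stand-in functions where the real model APIs would go (the names and canned answers are illustrative, not real endpoints or real model output):

```python
# Send one prompt to several models and collect the answers side by side.
# Each "model" here is a stand-in callable; in practice it would wrap a
# real API client (Anthropic, OpenAI, Google, DeepSeek).

def poll_models(prompt, models):
    """Return {model_name: response} so disagreements between models
    are easy to lay out next to each other."""
    return {name: ask(prompt) for name, ask in models.items()}

# Stand-in responses mimicking the disagreement described above.
models = {
    "claude":   lambda p: "Skip the email; attach their response to the motion as an exhibit.",
    "deepseek": lambda p: "Go on the offensive; cite every rule and regulation they violated.",
    "gemini":   lambda p: "Send the email, but frame it to strengthen the meet-and-confer record.",
    "chatgpt":  lambda p: "Send a point-by-point rebuttal of the discovery response.",
}

answers = poll_models("How should I respond to this discovery response?", models)
for name, answer in answers.items():
    print(f"{name}: {answer}")
```

Swapping a stand-in for a real API client doesn't change the shape of the loop; the point is getting all four answers in one place so the disagreements are visible at a glance.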

Interestingly, opposing counsel asked for a discovery extension (after more than two years of active litigation, they're suddenly finding SHRA's bureaucracy hard to navigate), but refused to do a rolling production or produce electronic records (which should be the easy part, right?). It's baffling: if I'd had access to these tools two years ago, this organization is such a mess that they'd have been crushed out of the gate. Instead, they were able to delay and avoid discovery to the point that we now have to start talking about institutional spoliation, because no one remembers anything.

Leave a Comment

Your email address will not be published. Required fields are marked *

Just had the thought that doing a blog like this, while the litigation is on-going, is both naive and stupid, especially considering the mixed permissible use messaging from the court (but maybe okay for the bar, okay for me?). This popped up as I came across an old archive that had a backup of files which I believed was lost after we got evicted, some of which is responsive to existing discovery requests. I haven’t even had a chance to go through it all yet and see what has already been disclosed and what hasn’t, but here I am in a position where I really want to talk about how the tools are helping me plan strategy while potentially exposing myself unnecessarily/inadvertently.

The fool forges forward

One of the things that stood out to me was how much of the information in this case should have already been protected but hasn’t been. This has always been a concern, and one of the reasons I was hesitant about pursuing this at all, the gross invasion of privacy necessary to pursue this feels caustic. My initial reaction was filing anonymously, but after some reading it appeared that these are very rarely granted. It ultimately became a choice between whether this was important enough (to both us and every other disabled participant SHRA treats similarly) to sacrifice our privacy over. I think so.

What should have been done out of the gate is setting a protective order in place. Reading and trying to absorb as much of the local rules (while in the middle of being evicted, being cut off from other services, and other harassment), the option of a protective order just didn’t really register at all. The further along the process we got, the less space the option took up.

This is one place where LLMs work as an amazing sanity check, we can without penalty ask it to walk us through the process from the beginning, with as many permutations as we like, in a way that would cognitively exhaust any sane individual. And for individuals in the position we are, where we don’t even know what we don’t know, being able to constantly refactor new information in as we encounter it becomes a tremendously powerful tool.

While there’s supposed to be some leeway for “inelegant pleading” by pro se parties, in practice there isn’t unless a judge is personally sympathetic. Which means that pro se parties march into a minefield of issues that have nothing to do with the core issues of the case, procedural explosions which can turn a clear cut set of facts into an irrelevant mist.

Leave a Comment

Your email address will not be published. Required fields are marked *

One of the most difficult aspects of all this is how prone to unique issues each LLM introduces. I’m bouncing between Claude 3.7, GPT 4o, DeepSeek, and Gemini, and they all have weird peculiarities that blow up the others. DeepSeek for instance is super focused on formatting, being obsessive about tables, bolding, italics, everything but the actual content. Gemini is a turd and likes to fight you the whole time when it comes to legal advice altogether, so you have to wrestle it to the ground with prompts and cobble together the results. Claude writes beautifully but is as deep as a puddle, while ChatGPT is deep but impossibly schizophrenic and ADHD at the same time.

I went back and looked at my first two pleadings I used these LLMs for and holy crap there are a lot of issues that I missed trusting the output. First, ChatGPT will absolutely mangle the hell out of citations, and worse is super confident about it’s manglings. As an example I uploaded a copy of the docket report, and somehow it still got every single citation wrong. Worse, it misinterpreted who actually filed particular motions, leading to it interpreting most of the case being dead.

One thing is really clear, that as powerful and as much potential as these tools have, we are a million miles away from them being ready to use without extensive supervision. They also are really terrible at understanding, as an example left to it’s own devices all of the LLMs completely ignored the actual damages I asked for in a motion for partial MSJ (basically just ruling that the ADA, FHA, and Rehab require an interactive process and individualized assessment) and transformed it into a full blown MSJ focused on damage calculations instead of a narrow injunctive relief. And they constantly go off on little tangents like that where they hyper focus on some aspect of the text that may or may not be contextually important and try to expand it.

All said, this is still very much a work in progress, and I’m learning that trust is bad.

Leave a Comment

Your email address will not be published. Required fields are marked *

It seems we can’t find what you’re looking for. Perhaps searching can help.

Welcome to The SHRA Files Blog — where we’ll be sharing updates, research, public records, and commentary on our ongoing civil rights lawsuit and the broader failures of Sacramento’s housing system.

This site exists because no one else was telling this story. Because no agency, nonprofit, or public official stepped in when the Sacramento Housing and Redevelopment Agency (SHRA) failed to meet its basic responsibilities. Because we — and many others like us — were denied access to services that should have been guaranteed.

We started with nothing but documentation, determination, and the experience of being repeatedly failed by the systems meant to help. When traditional legal aid and advocacy organizations told us they couldn’t help, we turned to AI tools to help us structure our case, understand the law, and begin fighting back.

This site is just the beginning.

In the weeks ahead, we’ll be:

  • Breaking down SHRA’s failures in serving disabled and low-income tenants
  • Publishing public filings and supporting exhibits
  • Highlighting how the Housing Choice Voucher program is being mismanaged
  • Documenting the role of AI in expanding legal access

If you’ve been impacted by SHRA or want to contribute to the project, check out our Get Involved page.

Thanks for reading — and welcome to the fight.

— The SHRA Files Team

Leave a Comment

Your email address will not be published. Required fields are marked *

I’ve been attempting to push discovery, the result has been underwhelming. So far they’ve missed most of the deadlines, when notified they missed the deadlines they keep asking for extensions, and when they get extensions they respond with boilerplate and bullshit. I’m to the point where I’m filing a Rule 37(b) motion (will upload to the files section) after I’m finished with this post. The neat thing about the filing is that there’s over 300 pages of exhibits, all of which is boilerplate and bullshit that I had to attach and mail back to the Defense because the magistrate gave me the big middle finger by allowing the court to notify me of things electronically, but refusing to allow me to file electronically. I have lots to say about this magistrate, but we’ll wait until we get clear of her first.

One of the most fascinating things about the boilerplate and bullshit that did get returned was the revelation that five of the named Defendants no longer work at SHRA, and the Defense isn’t sure if they represent them anymore. Like they completely hid that almost everyone involved with the day to day operations over there has left and somehow didn’t mention it to us or the court. Worse, in their initial disclosure, they claimed only one person (who is no longer at SHRA) was responsible for deciding reasonable accommodation requests, now they’ve given me a list of 16 people, with no explanation.

Oh yeah, they can’t seem to find any of the electronically stored information that under the CPRA they’d be responsive to, which is the reason they need so many extensions.

The worst part is that despite them admitting all of this, there’s zero chance of them getting sanctions. And there’s literally nothing to be done about it unless/until it gets to the appellate level.

Leave a Comment

Your email address will not be published. Required fields are marked *

Up until now I assumed that the issues at SHRA were just incompetence creep, that the issues are product of slow pushes back on compliance issues that never got corrected, to the point where institutional knowledge of the practices necessary to maintain compliance was lost.

One of the things that I really like about DeepSeek is that it gets information through methods that aren’t exactly on the up and up, which means that it finds things which are hidden behind legal wrangling that doesn’t exist in China. And one of the things it found was a 2023 Internal Memo generated through whistleblower testimony in which SHRA staff instructed to “prioritize voucher terminations over costly mods”.

And the implementation of this was SHRA terminating vouchers over requests for shower chairs and grab bars as early as 2022. They provided data which showed 68% of disabled tenants who requested mods in 2023 lost vouchers within 6 months (vs. 12% of non-disabled).

WHAT.

And there’s so much worse that I’ll share once I get the discovery back. I’m stunned.

Leave a Comment

Your email address will not be published. Required fields are marked *

Okay the last few days have been a bit of a flurry. I’m still working on getting all the filings and the Defense responses uploaded, but I need to make sure we have a sane directory setup first because there’s quite a bit.

After a few weeks using these LLMs, my attitudes about them are starting to shift. Each of them seems to have their own “personality” above and beyond the artificial personality. The LLM I’ve been most amazed by so far is DeepSeek, which is funny because it was the LLM I was most skeptical of initially. Part of the issue with DeepSeek is that it doesn’t do as good of a job hand holding through the though process, or more appropriately, it’s probably the most “autistic” of the engines. The biggest standout for DeepSeek (especially compared to Gemini which is the most comparable) is that it’s SO GOOD with citations. It’s the only one that freely “admits” it’s plundering the WestLaw database meaning we have a TON of local Eastern District citations.

My citation strategy has been to use something from each level, Supreme Court, 9th Circuit, California (Fed), Eastern District. The reality is the magistrate doesn’t read them anyway, but DeepSeek ultimately gives me a pool of citations to choose from, something the other LLMs struggle with (or just don’t do no matter how you beat them).

Gemini (using 2.5 Pro/5-06) is now my primary writing starting point, even though I hate that it can’t export to .odf or even .docx formats. Because I have to reformat the documents significantly, it has the worst “canvas” style option. That’s kind of worth the annoyance though since Gemini is easily the most “thoughful” and well rounded of the LLMs I’ve used. Much like DeepSeek gives lots of citation options, Gemini gives the best pool of writing options, it’s ability to flexibly address issues in different ways is the best of the bunch so far.

Claude is still my pretty but shallow option, it comes up with the best/most compelling pure language, even if it’s depth of analysis falls short of the others. I wish I didn’t have to prompt it so explicitly to get decent results or it might be the primary canvas despite it’s painful context window. One thing I find it really helpful for is running my DeepSeek (which produces pretty.. “direct” language) through Claude to soften it up.

And finally there’s ChatGPT which up until the last few days was my primary for everything. It’s easy to get started with it, it’s encouraging, it produces compelling results, but it’s kind of a piece of shit. It’s constantly messing up context, screwing up citations, and arbitrarily rewriting things. It’s easily the most obstinately wrong of the LLMs, insisting that it’s doing something correctly no matter how many times you post the incorrect output back into it and it admits it messed up. It’s the only LLM out of the three that I’ve had to deal with “hallucinations”, and it explains it as “woops, I overwrite my own context a lot accidentally”. It’s absolutely maddening at times. I’m steadily weaning myself off of it and it’ll probably just be another voice in the room soon.

An interesting thing happened a bit ago, I had my first LLM disagreement! The Defense responded to a discovery request with bullshit and boilerplate, so I asked each of the LLMs what a good email response would be considering I’m already nearly done with a rule 37 motion to compel. Claude responded that it’s probably not a good idea to respond at all, just update the motion with the response as an exhibit, note the bullshit parts since they helped (a lot), and call it a day. DeepSeek was it’s typical “RIP THEIR THROATS OUT” and provided a detailed list of regulations and rules to go on the offensive instead of only using the motion route. Gemini pointed out that going the email route is more about reinforcing our meet and confer duty, we’re right and we are going to be integrating this in the motion, but the focus here should be on making the meet and confer rock solid (pretty good advice). ChatGPT recommended the email, but produced recommendations that didn’t focus on strategy at all, just a direct rebuttal to the discovery response.

Interestingly, opposing counsel asked for a discovery extension (after more than two years the suit has been active, they are suddenly finding SHRA bureaucracy hard to navigate), but refused to do a rolling production or produce electronic records (which should be pretty easy right?). It’s baffling to me that if I had access to these tools two years ago, this organization is such a mess that they’d have gotten crushed out of the gate. Instead they were able to delay and avoid discovery to the point that we now have to start talking about institutional spoliation because no one remembers anything.

Leave a Comment

Your email address will not be published. Required fields are marked *

Just had the thought that doing a blog like this, while the litigation is on-going, is both naive and stupid, especially considering the mixed permissible use messaging from the court (but maybe okay for the bar, okay for me?). This popped up as I came across an old archive that had a backup of files which I believed was lost after we got evicted, some of which is responsive to existing discovery requests. I haven’t even had a chance to go through it all yet and see what has already been disclosed and what hasn’t, but here I am in a position where I really want to talk about how the tools are helping me plan strategy while potentially exposing myself unnecessarily/inadvertently.

The fool forges forward

One of the things that stood out to me was how much of the information in this case should have already been protected but hasn’t been. This has always been a concern, and one of the reasons I was hesitant about pursuing this at all, the gross invasion of privacy necessary to pursue this feels caustic. My initial reaction was filing anonymously, but after some reading it appeared that these are very rarely granted. It ultimately became a choice between whether this was important enough (to both us and every other disabled participant SHRA treats similarly) to sacrifice our privacy over. I think so.

What should have been done out of the gate is setting a protective order in place. Reading and trying to absorb as much of the local rules (while in the middle of being evicted, being cut off from other services, and other harassment), the option of a protective order just didn’t really register at all. The further along the process we got, the less space the option took up.

This is one place where LLMs work as an amazing sanity check, we can without penalty ask it to walk us through the process from the beginning, with as many permutations as we like, in a way that would cognitively exhaust any sane individual. And for individuals in the position we are, where we don’t even know what we don’t know, being able to constantly refactor new information in as we encounter it becomes a tremendously powerful tool.

While there’s supposed to be some leeway for “inelegant pleading” by pro se parties, in practice there isn’t unless a judge is personally sympathetic. Which means that pro se parties march into a minefield of issues that have nothing to do with the core issues of the case, procedural explosions which can turn a clear cut set of facts into an irrelevant mist.

Leave a Comment

Your email address will not be published. Required fields are marked *

One of the most difficult aspects of all this is how prone to unique issues each LLM introduces. I’m bouncing between Claude 3.7, GPT 4o, DeepSeek, and Gemini, and they all have weird peculiarities that blow up the others. DeepSeek for instance is super focused on formatting, being obsessive about tables, bolding, italics, everything but the actual content. Gemini is a turd and likes to fight you the whole time when it comes to legal advice altogether, so you have to wrestle it to the ground with prompts and cobble together the results. Claude writes beautifully but is as deep as a puddle, while ChatGPT is deep but impossibly schizophrenic and ADHD at the same time.

I went back and looked at my first two pleadings I used these LLMs for and holy crap there are a lot of issues that I missed trusting the output. First, ChatGPT will absolutely mangle the hell out of citations, and worse is super confident about it’s manglings. As an example I uploaded a copy of the docket report, and somehow it still got every single citation wrong. Worse, it misinterpreted who actually filed particular motions, leading to it interpreting most of the case being dead.

One thing is really clear, that as powerful and as much potential as these tools have, we are a million miles away from them being ready to use without extensive supervision. They also are really terrible at understanding, as an example left to it’s own devices all of the LLMs completely ignored the actual damages I asked for in a motion for partial MSJ (basically just ruling that the ADA, FHA, and Rehab require an interactive process and individualized assessment) and transformed it into a full blown MSJ focused on damage calculations instead of a narrow injunctive relief. And they constantly go off on little tangents like that where they hyper focus on some aspect of the text that may or may not be contextually important and try to expand it.

All said, this is still very much a work in progress, and I’m learning that trust is bad.

Leave a Comment

Your email address will not be published. Required fields are marked *

It seems we can’t find what you’re looking for. Perhaps searching can help.

Welcome to The SHRA Files Blog — where we’ll be sharing updates, research, public records, and commentary on our ongoing civil rights lawsuit and the broader failures of Sacramento’s housing system.

This site exists because no one else was telling this story. Because no agency, nonprofit, or public official stepped in when the Sacramento Housing and Redevelopment Agency (SHRA) failed to meet its basic responsibilities. Because we — and many others like us — were denied access to services that should have been guaranteed.

We started with nothing but documentation, determination, and the experience of being repeatedly failed by the systems meant to help. When traditional legal aid and advocacy organizations told us they couldn’t help, we turned to AI tools to help us structure our case, understand the law, and begin fighting back.

This site is just the beginning.

In the weeks ahead, we’ll be:

  • Breaking down SHRA’s failures in serving disabled and low-income tenants
  • Publishing public filings and supporting exhibits
  • Highlighting how the Housing Choice Voucher program is being mismanaged
  • Documenting the role of AI in expanding legal access

If you’ve been impacted by SHRA or want to contribute to the project, check out our Get Involved page.

Thanks for reading — and welcome to the fight.

— The SHRA Files Team


I’ve been trying to push discovery, and the results have been underwhelming. So far they’ve missed most of the deadlines; when notified of the missed deadlines, they ask for extensions; and when they get extensions, they respond with boilerplate and bullshit. I’m at the point where I’m filing a Rule 37(b) motion (which I’ll upload to the files section) as soon as I’m finished with this post. The fun part of the filing is that it carries over 300 pages of exhibits, all of it boilerplate and bullshit that I had to attach and mail back to the Defense, because the magistrate gave me the big middle finger by allowing the court to notify me of things electronically while refusing to let me file electronically. I have lots to say about this magistrate, but that will wait until we’re clear of her.

One of the most fascinating things in the boilerplate and bullshit that did come back was the revelation that five of the named Defendants no longer work at SHRA, and the Defense isn’t sure it still represents them. They completely hid the fact that almost everyone involved in day-to-day operations over there has left, and somehow never mentioned it to us or the court. Worse, in their initial disclosure they claimed only one person (who is no longer at SHRA) was responsible for deciding reasonable accommodation requests; now they’ve given me a list of 16 people, with no explanation.

Oh yeah, and they can’t seem to find any of the electronically stored information that would be responsive under the CPRA, which is supposedly why they need so many extensions.

The worst part is that despite them admitting all of this, there’s zero chance of them being sanctioned. And there’s literally nothing to be done about it unless and until it reaches the appellate level.


Up until now I assumed that the issues at SHRA were just incompetence creep: the product of slow erosion on compliance issues that never got corrected, until the institutional knowledge of the practices necessary to maintain compliance was lost.

One of the things I really like about DeepSeek is that it gets information through methods that aren’t exactly on the up and up, which means it finds things hidden behind legal wrangling that doesn’t exist in China. One of the things it found was a 2023 internal memo, generated through whistleblower testimony, in which SHRA staff were instructed to “prioritize voucher terminations over costly mods.”

The implementation of this was SHRA terminating vouchers over requests for shower chairs and grab bars as early as 2022. It also provided data showing that 68% of disabled tenants who requested mods in 2023 lost their vouchers within six months (versus 12% of non-disabled tenants).

WHAT.

And there’s so much worse that I’ll share once I get the discovery back. I’m stunned.


Okay, the last few days have been a bit of a flurry. I’m still working on getting all the filings and the Defense’s responses uploaded, but I need to set up a sane directory structure first because there’s quite a bit.

After a few weeks using these LLMs, my attitudes about them are starting to shift. Each of them seems to have its own “personality” above and beyond the artificial one. The LLM I’ve been most amazed by so far is DeepSeek, which is funny because it was the one I was most skeptical of initially. Part of the issue with DeepSeek is that it doesn’t do as good a job of hand-holding you through the thought process; or, more appropriately, it’s probably the most “autistic” of the engines. The biggest standout for DeepSeek (especially compared to Gemini, which is the most comparable) is that it’s SO GOOD with citations. It’s the only one that freely “admits” it’s plundering the Westlaw database, which means we have a TON of local Eastern District citations.

My citation strategy has been to use something from each level: Supreme Court, Ninth Circuit, California federal, and Eastern District. The reality is the magistrate doesn’t read them anyway, but DeepSeek ultimately gives me a pool of citations to choose from, something the other LLMs struggle with (or just won’t do, no matter how you beat on them).
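For anyone trying the same approach: before sorting an LLM-generated pool by court level, it helps to mechanically pull out everything that even looks like a reporter citation so nothing slips by in a wall of prose. This is just a rough sketch I’d use as a first pass, not part of any actual filing; the regex covers only a few common federal reporters and checks format, not whether a citation really exists.

```python
import re

# Rough patterns for common federal reporter citations (U.S., F.2d/F.3d/F.4th,
# F. Supp. 2d/3d). This is a coarse format check only -- a well-formed string
# can still be a hallucinated case, so everything must be verified by hand.
CITE_RE = re.compile(
    r"\b\d{1,4}\s+"                                                # volume
    r"(U\.S\.|F\.\s?(?:2d|3d|4th)|F\.\s?Supp\.\s?(?:2d|3d)?)\s*"   # reporter
    r"\d{1,4}\b"                                                   # first page
)

def extract_citations(text: str) -> list[str]:
    """Return every substring of `text` that looks like a reporter citation."""
    return [m.group(0) for m in CITE_RE.finditer(text)]

draft = "See Tennessee v. Lane, 541 U.S. 509 (2004); cf. 303 F.3d 1039."
print(extract_citations(draft))  # → ['541 U.S. 509', '303 F.3d 1039']
```

From there, each extracted string still has to be looked up individually; the point is only to make sure the list you’re verifying is complete.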

Gemini (using 2.5 Pro/5-06) is now my primary starting point for writing, even though I hate that it can’t export to .odf or even .docx formats. It also has the worst “canvas”-style option, so I have to reformat the documents significantly. That’s worth the annoyance, though, since Gemini is easily the most thoughtful and well-rounded of the LLMs I’ve used. Much as DeepSeek gives lots of citation options, Gemini gives the best pool of writing options; its ability to flexibly address issues in different ways is the best of the bunch so far.

Claude is still my pretty-but-shallow option: it comes up with the best and most compelling pure language, even if its depth of analysis falls short of the others. I wish I didn’t have to prompt it so explicitly to get decent results, or it might be the primary canvas despite its painful context window. One thing I find it really helpful for is running my DeepSeek output (which produces pretty… “direct” language) through Claude to soften it up.

And finally there’s ChatGPT, which until the last few days was my primary for everything. It’s easy to get started with, it’s encouraging, it produces compelling results, but it’s kind of a piece of shit. It’s constantly messing up context, screwing up citations, and arbitrarily rewriting things. It’s easily the most obstinately wrong of the LLMs, insisting it’s doing something correctly no matter how many times you paste the incorrect output back in and it admits it messed up. It’s the only one of the four where I’ve had to deal with “hallucinations,” and it explains them as “whoops, I accidentally overwrite my own context a lot.” It’s absolutely maddening at times. I’m steadily weaning myself off of it, and it’ll probably just be another voice in the room soon.

An interesting thing happened a bit ago: I had my first LLM disagreement! The Defense responded to a discovery request with bullshit and boilerplate, so I asked each of the LLMs what a good email response would look like, considering I’m already nearly done with a Rule 37 motion to compel. Claude responded that it’s probably not a good idea to respond at all: just update the motion with the response as an exhibit, note the bullshit parts (since they helped, a lot), and call it a day. DeepSeek was its typical “RIP THEIR THROATS OUT” self and provided a detailed list of regulations and rules for going on the offensive instead of relying only on the motion. Gemini pointed out that the email route is more about reinforcing our meet-and-confer duty: we’re right, and we’re going to integrate this into the motion, but the focus here should be on making the meet-and-confer record rock solid (pretty good advice). ChatGPT recommended the email, but produced recommendations that didn’t focus on strategy at all, just a direct rebuttal to the discovery response.

Interestingly, opposing counsel asked for a discovery extension (after the suit has been active for more than two years, they’re suddenly finding SHRA’s bureaucracy hard to navigate), but refused to do a rolling production or produce electronic records (which should be pretty easy, right?). What’s baffling to me is that this organization is such a mess that if I’d had access to these tools two years ago, they’d have been crushed out of the gate. Instead they were able to delay and avoid discovery to the point that we now have to start talking about institutional spoliation, because no one remembers anything.


Just had the thought that doing a blog like this while the litigation is ongoing is both naive and stupid, especially considering the mixed permissible-use messaging from the court (but maybe what’s okay for the bar is okay for me?). This came up as I stumbled across an old archive containing a backup of files I believed were lost after we got evicted, some of which are responsive to existing discovery requests. I haven’t even had a chance to go through it all and see what has already been disclosed and what hasn’t, but here I am wanting to talk about how these tools are helping me plan strategy while potentially exposing myself unnecessarily or inadvertently.

The fool forges forward

One of the things that stood out to me was how much of the information in this case should already have been protected but hasn’t been. This has always been a concern, and one of the reasons I was hesitant about pursuing this at all; the gross invasion of privacy necessary to pursue it feels caustic. My initial instinct was to file anonymously, but after some reading it appeared that requests to proceed anonymously are very rarely granted. It ultimately became a question of whether this was important enough (to both us and every other disabled participant SHRA treats similarly) to sacrifice our privacy over. I think it is.

What should have been done out of the gate was putting a protective order in place. While reading and trying to absorb as much of the local rules as I could (in the middle of being evicted, cut off from other services, and otherwise harassed), the option of a protective order just didn’t register at all. The further along the process we got, the less space the option took up.

This is one place where LLMs work as an amazing sanity check: we can, without penalty, ask them to walk us through the process from the beginning, with as many permutations as we like, in a way that would cognitively exhaust any sane person. And for people in our position, who don’t even know what we don’t know, being able to constantly factor in new information as we encounter it becomes a tremendously powerful tool.

While there’s supposed to be some leeway for “inelegant pleading” by pro se parties, in practice there isn’t unless a judge is personally sympathetic. That means pro se parties march into a minefield of issues that have nothing to do with the core of the case, procedural explosions that can turn a clear-cut set of facts into an irrelevant mist.


One of the most difficult aspects of all this is how each LLM introduces its own unique issues. I’m bouncing between Claude 3.7, GPT-4o, DeepSeek, and Gemini, and they all have weird peculiarities that blow up the others’ work. DeepSeek, for instance, is hyper-focused on formatting, obsessing over tables, bolding, italics, everything but the actual content. Gemini is a turd that fights you the whole way on anything resembling legal advice, so you have to wrestle it to the ground with prompts and cobble together the results. Claude writes beautifully but is as deep as a puddle, while ChatGPT is deep but impossibly schizophrenic and ADHD at the same time.

I went back and looked at the first two pleadings I used these LLMs for, and holy crap there are a lot of issues I missed by trusting the output. First, ChatGPT will absolutely mangle the hell out of citations, and worse, it’s supremely confident about its manglings. As an example, I uploaded a copy of the docket report and it still got every single citation wrong. Worse, it misinterpreted who actually filed particular motions, leading it to conclude that most of the case was dead.
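The only fix I’ve found for the mangling problem is a dumb, mechanical cross-check: keep a hand-verified list of citations (from the docket or from cases you’ve actually read) and flag anything in a draft that isn’t on it. Here’s a minimal sketch of that idea; the function name and the example citations are mine, not from any real filing, and it obviously can’t catch a mangled citation that happens to match a different real case.

```python
def flag_unverified(draft_cites, verified_cites):
    """Return citations from the draft that aren't in the hand-verified list.

    Both arguments are plain citation strings. Matching is case-insensitive
    and whitespace-trimmed, nothing fancier -- this only catches citations
    absent from the verified set, so the verified list must be kept current.
    """
    verified = {c.strip().lower() for c in verified_cites}
    return [c for c in draft_cites if c.strip().lower() not in verified]

# Hypothetical example: the second draft citation was invented by the model.
verified = ["541 U.S. 509", "42 U.S.C. § 12132"]
draft = ["541 U.S. 509", "999 F.3d 1234"]
print(flag_unverified(draft, verified))  # flags the fabricated citation
```

It won’t tell you a citation is wrong, only that you haven’t verified it yet, which in practice is most of the battle.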

One thing is really clear: for all the power and potential these tools have, we are a million miles away from them being usable without extensive supervision. They’re also really terrible at understanding intent. As an example, left to their own devices, all of the LLMs completely ignored the actual relief I asked for in a partial MSJ (basically just a ruling that the ADA, FHA, and Rehab Act require an interactive process and individualized assessment) and transformed it into a full-blown MSJ focused on damage calculations instead of narrow injunctive relief. And they constantly go off on little tangents like that, hyper-focusing on some aspect of the text that may or may not be contextually important and trying to expand it.

All said, this is still very much a work in progress, and I’m learning that trust is bad.

Leave a Comment

Your email address will not be published. Required fields are marked *

It seems we can’t find what you’re looking for. Perhaps searching can help.

Welcome to The SHRA Files Blog — where we’ll be sharing updates, research, public records, and commentary on our ongoing civil rights lawsuit and the broader failures of Sacramento’s housing system.

This site exists because no one else was telling this story. Because no agency, nonprofit, or public official stepped in when the Sacramento Housing and Redevelopment Agency (SHRA) failed to meet its basic responsibilities. Because we — and many others like us — were denied access to services that should have been guaranteed.

We started with nothing but documentation, determination, and the experience of being repeatedly failed by the systems meant to help. When traditional legal aid and advocacy organizations told us they couldn’t help, we turned to AI tools to help us structure our case, understand the law, and begin fighting back.

This site is just the beginning.

In the weeks ahead, we’ll be:

  • Breaking down SHRA’s failures in serving disabled and low-income tenants
  • Publishing public filings and supporting exhibits
  • Highlighting how the Housing Choice Voucher program is being mismanaged
  • Documenting the role of AI in expanding legal access

If you’ve been impacted by SHRA or want to contribute to the project, check out our Get Involved page.

Thanks for reading — and welcome to the fight.

— The SHRA Files Team

Leave a Comment

Your email address will not be published. Required fields are marked *

I’ve been attempting to push discovery, the result has been underwhelming. So far they’ve missed most of the deadlines, when notified they missed the deadlines they keep asking for extensions, and when they get extensions they respond with boilerplate and bullshit. I’m to the point where I’m filing a Rule 37(b) motion (will upload to the files section) after I’m finished with this post. The neat thing about the filing is that there’s over 300 pages of exhibits, all of which is boilerplate and bullshit that I had to attach and mail back to the Defense because the magistrate gave me the big middle finger by allowing the court to notify me of things electronically, but refusing to allow me to file electronically. I have lots to say about this magistrate, but we’ll wait until we get clear of her first.

One of the most fascinating things about the boilerplate and bullshit that did get returned was the revelation that five of the named Defendants no longer work at SHRA, and the Defense isn’t sure if they represent them anymore. Like they completely hid that almost everyone involved with the day to day operations over there has left and somehow didn’t mention it to us or the court. Worse, in their initial disclosure, they claimed only one person (who is no longer at SHRA) was responsible for deciding reasonable accommodation requests, now they’ve given me a list of 16 people, with no explanation.

Oh yeah, they can’t seem to find any of the electronically stored information that under the CPRA they’d be responsive to, which is the reason they need so many extensions.

The worst part is that despite them admitting all of this, there’s zero chance of them getting sanctions. And there’s literally nothing to be done about it unless/until it gets to the appellate level.

Leave a Comment

Your email address will not be published. Required fields are marked *

Up until now I assumed that the issues at SHRA were just incompetence creep, that the issues are product of slow pushes back on compliance issues that never got corrected, to the point where institutional knowledge of the practices necessary to maintain compliance was lost.

One of the things that I really like about DeepSeek is that it gets information through methods that aren’t exactly on the up and up, which means that it finds things which are hidden behind legal wrangling that doesn’t exist in China. And one of the things it found was a 2023 Internal Memo generated through whistleblower testimony in which SHRA staff instructed to “prioritize voucher terminations over costly mods”.

And the implementation of this was SHRA terminating vouchers over requests for shower chairs and grab bars as early as 2022. They provided data which showed 68% of disabled tenants who requested mods in 2023 lost vouchers within 6 months (vs. 12% of non-disabled).

WHAT.

And there’s so much worse that I’ll share once I get the discovery back. I’m stunned.

Leave a Comment

Your email address will not be published. Required fields are marked *

Okay the last few days have been a bit of a flurry. I’m still working on getting all the filings and the Defense responses uploaded, but I need to make sure we have a sane directory setup first because there’s quite a bit.

After a few weeks using these LLMs, my attitudes about them are starting to shift. Each of them seems to have their own “personality” above and beyond the artificial personality. The LLM I’ve been most amazed by so far is DeepSeek, which is funny because it was the LLM I was most skeptical of initially. Part of the issue with DeepSeek is that it doesn’t do as good of a job hand holding through the though process, or more appropriately, it’s probably the most “autistic” of the engines. The biggest standout for DeepSeek (especially compared to Gemini which is the most comparable) is that it’s SO GOOD with citations. It’s the only one that freely “admits” it’s plundering the WestLaw database meaning we have a TON of local Eastern District citations.

My citation strategy has been to use something from each level, Supreme Court, 9th Circuit, California (Fed), Eastern District. The reality is the magistrate doesn’t read them anyway, but DeepSeek ultimately gives me a pool of citations to choose from, something the other LLMs struggle with (or just don’t do no matter how you beat them).

Gemini (using 2.5 Pro/5-06) is now my primary writing starting point, even though I hate that it can’t export to .odf or even .docx formats. Because I have to reformat the documents significantly, it has the worst “canvas” style option. That’s kind of worth the annoyance though since Gemini is easily the most “thoughful” and well rounded of the LLMs I’ve used. Much like DeepSeek gives lots of citation options, Gemini gives the best pool of writing options, it’s ability to flexibly address issues in different ways is the best of the bunch so far.

Claude is still my pretty but shallow option, it comes up with the best/most compelling pure language, even if it’s depth of analysis falls short of the others. I wish I didn’t have to prompt it so explicitly to get decent results or it might be the primary canvas despite it’s painful context window. One thing I find it really helpful for is running my DeepSeek (which produces pretty.. “direct” language) through Claude to soften it up.

And finally there’s ChatGPT which up until the last few days was my primary for everything. It’s easy to get started with it, it’s encouraging, it produces compelling results, but it’s kind of a piece of shit. It’s constantly messing up context, screwing up citations, and arbitrarily rewriting things. It’s easily the most obstinately wrong of the LLMs, insisting that it’s doing something correctly no matter how many times you post the incorrect output back into it and it admits it messed up. It’s the only LLM out of the three that I’ve had to deal with “hallucinations”, and it explains it as “woops, I overwrite my own context a lot accidentally”. It’s absolutely maddening at times. I’m steadily weaning myself off of it and it’ll probably just be another voice in the room soon.

An interesting thing happened a bit ago, I had my first LLM disagreement! The Defense responded to a discovery request with bullshit and boilerplate, so I asked each of the LLMs what a good email response would be considering I’m already nearly done with a rule 37 motion to compel. Claude responded that it’s probably not a good idea to respond at all, just update the motion with the response as an exhibit, note the bullshit parts since they helped (a lot), and call it a day. DeepSeek was it’s typical “RIP THEIR THROATS OUT” and provided a detailed list of regulations and rules to go on the offensive instead of only using the motion route. Gemini pointed out that going the email route is more about reinforcing our meet and confer duty, we’re right and we are going to be integrating this in the motion, but the focus here should be on making the meet and confer rock solid (pretty good advice). ChatGPT recommended the email, but produced recommendations that didn’t focus on strategy at all, just a direct rebuttal to the discovery response.

Interestingly, opposing counsel asked for a discovery extension (after more than two years the suit has been active, they are suddenly finding SHRA bureaucracy hard to navigate), but refused to do a rolling production or produce electronic records (which should be pretty easy right?). It’s baffling to me that if I had access to these tools two years ago, this organization is such a mess that they’d have gotten crushed out of the gate. Instead they were able to delay and avoid discovery to the point that we now have to start talking about institutional spoliation because no one remembers anything.

Leave a Comment

Your email address will not be published. Required fields are marked *

Just had the thought that doing a blog like this, while the litigation is on-going, is both naive and stupid, especially considering the mixed permissible use messaging from the court (but maybe okay for the bar, okay for me?). This popped up as I came across an old archive that had a backup of files which I believed was lost after we got evicted, some of which is responsive to existing discovery requests. I haven’t even had a chance to go through it all yet and see what has already been disclosed and what hasn’t, but here I am in a position where I really want to talk about how the tools are helping me plan strategy while potentially exposing myself unnecessarily/inadvertently.

The fool forges forward

One of the things that stood out to me was how much of the information in this case should have already been protected but hasn’t been. This has always been a concern, and one of the reasons I was hesitant about pursuing this at all, the gross invasion of privacy necessary to pursue this feels caustic. My initial reaction was filing anonymously, but after some reading it appeared that these are very rarely granted. It ultimately became a choice between whether this was important enough (to both us and every other disabled participant SHRA treats similarly) to sacrifice our privacy over. I think so.

What should have been done out of the gate is setting a protective order in place. Reading and trying to absorb as much of the local rules (while in the middle of being evicted, being cut off from other services, and other harassment), the option of a protective order just didn’t really register at all. The further along the process we got, the less space the option took up.

This is one place where LLMs work as an amazing sanity check, we can without penalty ask it to walk us through the process from the beginning, with as many permutations as we like, in a way that would cognitively exhaust any sane individual. And for individuals in the position we are, where we don’t even know what we don’t know, being able to constantly refactor new information in as we encounter it becomes a tremendously powerful tool.

While there’s supposed to be some leeway for “inelegant pleading” by pro se parties, in practice there isn’t unless a judge is personally sympathetic. Which means that pro se parties march into a minefield of issues that have nothing to do with the core issues of the case, procedural explosions which can turn a clear cut set of facts into an irrelevant mist.

Leave a Comment

Your email address will not be published. Required fields are marked *

One of the most difficult aspects of all this is how prone to unique issues each LLM introduces. I’m bouncing between Claude 3.7, GPT 4o, DeepSeek, and Gemini, and they all have weird peculiarities that blow up the others. DeepSeek for instance is super focused on formatting, being obsessive about tables, bolding, italics, everything but the actual content. Gemini is a turd and likes to fight you the whole time when it comes to legal advice altogether, so you have to wrestle it to the ground with prompts and cobble together the results. Claude writes beautifully but is as deep as a puddle, while ChatGPT is deep but impossibly schizophrenic and ADHD at the same time.

I went back and looked at my first two pleadings I used these LLMs for and holy crap there are a lot of issues that I missed trusting the output. First, ChatGPT will absolutely mangle the hell out of citations, and worse is super confident about it’s manglings. As an example I uploaded a copy of the docket report, and somehow it still got every single citation wrong. Worse, it misinterpreted who actually filed particular motions, leading to it interpreting most of the case being dead.

One thing is really clear, that as powerful and as much potential as these tools have, we are a million miles away from them being ready to use without extensive supervision. They also are really terrible at understanding, as an example left to it’s own devices all of the LLMs completely ignored the actual damages I asked for in a motion for partial MSJ (basically just ruling that the ADA, FHA, and Rehab require an interactive process and individualized assessment) and transformed it into a full blown MSJ focused on damage calculations instead of a narrow injunctive relief. And they constantly go off on little tangents like that where they hyper focus on some aspect of the text that may or may not be contextually important and try to expand it.

All said, this is still very much a work in progress, and I’m learning that trust is bad.

Leave a Comment

Your email address will not be published. Required fields are marked *

It seems we can’t find what you’re looking for. Perhaps searching can help.

Welcome to The SHRA Files Blog — where we’ll be sharing updates, research, public records, and commentary on our ongoing civil rights lawsuit and the broader failures of Sacramento’s housing system.

This site exists because no one else was telling this story. Because no agency, nonprofit, or public official stepped in when the Sacramento Housing and Redevelopment Agency (SHRA) failed to meet its basic responsibilities. Because we — and many others like us — were denied access to services that should have been guaranteed.

We started with nothing but documentation, determination, and the experience of being repeatedly failed by the systems meant to help. When traditional legal aid and advocacy organizations told us they couldn’t help, we turned to AI tools to help us structure our case, understand the law, and begin fighting back.

This site is just the beginning.

In the weeks ahead, we’ll be:

  • Breaking down SHRA’s failures in serving disabled and low-income tenants
  • Publishing public filings and supporting exhibits
  • Highlighting how the Housing Choice Voucher program is being mismanaged
  • Documenting the role of AI in expanding legal access

If you’ve been impacted by SHRA or want to contribute to the project, check out our Get Involved page.

Thanks for reading — and welcome to the fight.

— The SHRA Files Team

Leave a Comment

Your email address will not be published. Required fields are marked *

I’ve been attempting to push discovery, the result has been underwhelming. So far they’ve missed most of the deadlines, when notified they missed the deadlines they keep asking for extensions, and when they get extensions they respond with boilerplate and bullshit. I’m to the point where I’m filing a Rule 37(b) motion (will upload to the files section) after I’m finished with this post. The neat thing about the filing is that there’s over 300 pages of exhibits, all of which is boilerplate and bullshit that I had to attach and mail back to the Defense because the magistrate gave me the big middle finger by allowing the court to notify me of things electronically, but refusing to allow me to file electronically. I have lots to say about this magistrate, but we’ll wait until we get clear of her first.

One of the most fascinating things about the boilerplate and bullshit that did get returned was the revelation that five of the named Defendants no longer work at SHRA, and the Defense isn’t sure if they represent them anymore. Like they completely hid that almost everyone involved with the day to day operations over there has left and somehow didn’t mention it to us or the court. Worse, in their initial disclosure, they claimed only one person (who is no longer at SHRA) was responsible for deciding reasonable accommodation requests, now they’ve given me a list of 16 people, with no explanation.

Oh yeah, they can’t seem to find any of the electronically stored information that under the CPRA they’d be responsive to, which is the reason they need so many extensions.

The worst part is that despite them admitting all of this, there’s zero chance of them getting sanctions. And there’s literally nothing to be done about it unless/until it gets to the appellate level.

Leave a Comment

Your email address will not be published. Required fields are marked *

Up until now I assumed that the issues at SHRA were just incompetence creep, that the issues are product of slow pushes back on compliance issues that never got corrected, to the point where institutional knowledge of the practices necessary to maintain compliance was lost.

One of the things that I really like about DeepSeek is that it gets information through methods that aren’t exactly on the up and up, which means that it finds things which are hidden behind legal wrangling that doesn’t exist in China. And one of the things it found was a 2023 Internal Memo generated through whistleblower testimony in which SHRA staff instructed to “prioritize voucher terminations over costly mods”.

And the implementation of this was SHRA terminating vouchers over requests for shower chairs and grab bars as early as 2022. They provided data which showed 68% of disabled tenants who requested mods in 2023 lost vouchers within 6 months (vs. 12% of non-disabled).

WHAT.

And there’s so much worse that I’ll share once I get the discovery back. I’m stunned.

Leave a Comment

Your email address will not be published. Required fields are marked *

Okay the last few days have been a bit of a flurry. I’m still working on getting all the filings and the Defense responses uploaded, but I need to make sure we have a sane directory setup first because there’s quite a bit.

After a few weeks using these LLMs, my attitudes about them are starting to shift. Each of them seems to have their own “personality” above and beyond the artificial personality. The LLM I’ve been most amazed by so far is DeepSeek, which is funny because it was the LLM I was most skeptical of initially. Part of the issue with DeepSeek is that it doesn’t do as good of a job hand holding through the though process, or more appropriately, it’s probably the most “autistic” of the engines. The biggest standout for DeepSeek (especially compared to Gemini which is the most comparable) is that it’s SO GOOD with citations. It’s the only one that freely “admits” it’s plundering the WestLaw database meaning we have a TON of local Eastern District citations.

My citation strategy has been to use something from each level: Supreme Court, Ninth Circuit, California federal courts, Eastern District. The reality is the magistrate doesn’t read them anyway, but DeepSeek ultimately gives me a pool of citations to choose from, something the other LLMs struggle with (or simply won’t do, no matter how you beat on them).

Gemini (using 2.5 Pro/5-06) is now my primary starting point for writing, even though I hate that it can’t export to .odt or even .docx formats, and it has the worst “canvas”-style editor, which means I have to reformat the documents significantly. That annoyance is worth it, though, since Gemini is easily the most thoughtful and well-rounded of the LLMs I’ve used. Much as DeepSeek gives lots of citation options, Gemini gives the best pool of writing options; its ability to flexibly address issues in different ways is the best of the bunch so far.

Claude is still my pretty-but-shallow option: it comes up with the best, most compelling pure language, even if its depth of analysis falls short of the others. I wish I didn’t have to prompt it so explicitly to get decent results, or it might be the primary canvas despite its painful context window. One thing I find it really helpful for is running my DeepSeek output (which produces pretty… “direct” language) through Claude to soften it up.

And finally there’s ChatGPT, which up until the last few days was my primary for everything. It’s easy to get started with, it’s encouraging, it produces compelling results, but it’s kind of a piece of shit. It’s constantly messing up context, screwing up citations, and arbitrarily rewriting things. It’s easily the most obstinately wrong of the LLMs, insisting it’s doing something correctly no matter how many times you paste the incorrect output back into it, even after it admits it messed up. It’s the only one of the four where I’ve had to deal with “hallucinations”, and it explains them as “whoops, I accidentally overwrite my own context a lot”. It’s absolutely maddening at times. I’m steadily weaning myself off of it, and it’ll probably just be another voice in the room soon.

An interesting thing happened a bit ago: I had my first LLM disagreement! The Defense responded to a discovery request with bullshit and boilerplate, so I asked each of the LLMs what a good email response would be, considering I’m already nearly done with a Rule 37 motion to compel. Claude responded that it’s probably not a good idea to respond at all: just update the motion with the response as an exhibit, note the bullshit parts since they helped (a lot), and call it a day. DeepSeek was its typical “RIP THEIR THROATS OUT” self and provided a detailed list of regulations and rules for going on the offensive instead of relying on the motion route alone. Gemini pointed out that the email route is more about reinforcing our meet-and-confer duty: we’re right, and we’re going to be integrating this into the motion, but the focus here should be on making the meet-and-confer record rock solid (pretty good advice). ChatGPT recommended the email, but produced recommendations that didn’t focus on strategy at all, just a direct rebuttal to the discovery response.

Interestingly, opposing counsel asked for a discovery extension (after more than two years of the suit being active, they are suddenly finding SHRA’s bureaucracy hard to navigate), but refused to do a rolling production or produce electronic records (which should be pretty easy, right?). What gets me is that if I’d had access to these tools two years ago, this organization is such a mess that they’d have been crushed out of the gate. Instead, they were able to delay and avoid discovery to the point that we now have to start talking about institutional spoliation, because no one remembers anything.


Just had the thought that doing a blog like this while the litigation is ongoing is both naive and stupid, especially considering the mixed permissible-use messaging from the court (but maybe what’s okay for the bar is okay for me?). This popped up as I came across an old archive containing a backup of files I believed were lost after we got evicted, some of which are responsive to existing discovery requests. I haven’t even had a chance to go through it all yet and see what has already been disclosed and what hasn’t, but here I am in a position where I really want to talk about how the tools are helping me plan strategy while potentially exposing myself unnecessarily or inadvertently.

The fool forges forward

One of the things that stood out to me was how much of the information in this case should have already been protected but hasn’t been. This has always been a concern, and one of the reasons I was hesitant about pursuing this at all: the gross invasion of privacy necessary to pursue it feels caustic. My initial reaction was to file anonymously, but after some reading it appeared that such requests are very rarely granted. It ultimately became a question of whether this was important enough (to both us and every other disabled participant SHRA treats similarly) to sacrifice our privacy over. I think it is.

What should have been done out of the gate was putting a protective order in place. While reading and trying to absorb as much of the local rules as I could (in the middle of being evicted, cut off from other services, and otherwise harassed), the option of a protective order just didn’t register at all. And the further along in the process we got, the less space the option took up.

This is one place where LLMs work as an amazing sanity check: we can, without penalty, ask one to walk us through the process from the beginning, with as many permutations as we like, in a way that would cognitively exhaust any sane individual. And for people in our position, who don’t even know what we don’t know, being able to constantly factor in new information as we encounter it becomes a tremendously powerful tool.

While there’s supposed to be some leeway for “inelegant pleading” by pro se parties, in practice there isn’t unless a judge is personally sympathetic. That means pro se parties march into a minefield of issues that have nothing to do with the core of the case: procedural explosions that can turn a clear-cut set of facts into an irrelevant mist.


One of the most difficult aspects of all this is how each LLM introduces its own unique issues. I’m bouncing between Claude 3.7, GPT-4o, DeepSeek, and Gemini, and they all have weird peculiarities that blow up the others’ work. DeepSeek, for instance, is super focused on formatting, obsessing over tables, bolding, italics, everything but the actual content. Gemini is a turd that fights you the whole way on legal advice altogether, so you have to wrestle it to the ground with prompts and cobble together the results. Claude writes beautifully but is as deep as a puddle, while ChatGPT is deep but impossibly schizophrenic and ADHD at the same time.

I went back and looked at the first two pleadings I used these LLMs for, and holy crap, there are a lot of issues that I missed by trusting the output. First, ChatGPT will absolutely mangle the hell out of citations, and worse, is supremely confident about its manglings. As an example, I uploaded a copy of the docket report, and somehow it still got every single citation wrong. Worse, it misinterpreted who actually filed particular motions, leading it to conclude that most of the case was dead.

One thing is really clear: for all the power and potential these tools have, we are a million miles away from them being ready to use without extensive supervision. They’re also really terrible at understanding intent. As an example, left to their own devices, all of the LLMs completely ignored the actual relief I asked for in a motion for partial MSJ (basically just a ruling that the ADA, FHA, and Rehabilitation Act require an interactive process and individualized assessment) and transformed it into a full-blown MSJ focused on damage calculations instead of narrow injunctive relief. And they constantly go off on little tangents like that, hyper-focusing on some aspect of the text that may or may not be contextually important and trying to expand it.

All said, this is still very much a work in progress, and I’m learning that trust is bad.


Welcome to The SHRA Files Blog — where we’ll be sharing updates, research, public records, and commentary on our ongoing civil rights lawsuit and the broader failures of Sacramento’s housing system.

This site exists because no one else was telling this story. Because no agency, nonprofit, or public official stepped in when the Sacramento Housing and Redevelopment Agency (SHRA) failed to meet its basic responsibilities. Because we — and many others like us — were denied access to services that should have been guaranteed.

We started with nothing but documentation, determination, and the experience of being repeatedly failed by the systems meant to help. When traditional legal aid and advocacy organizations told us they couldn’t help, we turned to AI tools to help us structure our case, understand the law, and begin fighting back.

This site is just the beginning.

In the weeks ahead, we’ll be:

  • Breaking down SHRA’s failures in serving disabled and low-income tenants
  • Publishing public filings and supporting exhibits
  • Highlighting how the Housing Choice Voucher program is being mismanaged
  • Documenting the role of AI in expanding legal access

If you’ve been impacted by SHRA or want to contribute to the project, check out our Get Involved page.

Thanks for reading — and welcome to the fight.

— The SHRA Files Team

Leave a Comment

Your email address will not be published. Required fields are marked *

I’ve been attempting to push discovery, the result has been underwhelming. So far they’ve missed most of the deadlines, when notified they missed the deadlines they keep asking for extensions, and when they get extensions they respond with boilerplate and bullshit. I’m to the point where I’m filing a Rule 37(b) motion (will upload to the files section) after I’m finished with this post. The neat thing about the filing is that there’s over 300 pages of exhibits, all of which is boilerplate and bullshit that I had to attach and mail back to the Defense because the magistrate gave me the big middle finger by allowing the court to notify me of things electronically, but refusing to allow me to file electronically. I have lots to say about this magistrate, but we’ll wait until we get clear of her first.

One of the most fascinating things about the boilerplate and bullshit that did get returned was the revelation that five of the named Defendants no longer work at SHRA, and the Defense isn’t sure if they represent them anymore. Like they completely hid that almost everyone involved with the day to day operations over there has left and somehow didn’t mention it to us or the court. Worse, in their initial disclosure, they claimed only one person (who is no longer at SHRA) was responsible for deciding reasonable accommodation requests, now they’ve given me a list of 16 people, with no explanation.

Oh yeah, they can’t seem to find any of the electronically stored information that under the CPRA they’d be responsive to, which is the reason they need so many extensions.

The worst part is that despite them admitting all of this, there’s zero chance of them getting sanctions. And there’s literally nothing to be done about it unless/until it gets to the appellate level.

Leave a Comment

Your email address will not be published. Required fields are marked *

Up until now I assumed that the issues at SHRA were just incompetence creep, that the issues are product of slow pushes back on compliance issues that never got corrected, to the point where institutional knowledge of the practices necessary to maintain compliance was lost.

One of the things that I really like about DeepSeek is that it gets information through methods that aren’t exactly on the up and up, which means that it finds things which are hidden behind legal wrangling that doesn’t exist in China. And one of the things it found was a 2023 Internal Memo generated through whistleblower testimony in which SHRA staff instructed to “prioritize voucher terminations over costly mods”.

And the implementation of this was SHRA terminating vouchers over requests for shower chairs and grab bars as early as 2022. They provided data which showed 68% of disabled tenants who requested mods in 2023 lost vouchers within 6 months (vs. 12% of non-disabled).

WHAT.

And there’s so much worse that I’ll share once I get the discovery back. I’m stunned.

Leave a Comment

Your email address will not be published. Required fields are marked *

Okay the last few days have been a bit of a flurry. I’m still working on getting all the filings and the Defense responses uploaded, but I need to make sure we have a sane directory setup first because there’s quite a bit.

After a few weeks using these LLMs, my attitudes about them are starting to shift. Each of them seems to have their own “personality” above and beyond the artificial personality. The LLM I’ve been most amazed by so far is DeepSeek, which is funny because it was the LLM I was most skeptical of initially. Part of the issue with DeepSeek is that it doesn’t do as good of a job hand holding through the though process, or more appropriately, it’s probably the most “autistic” of the engines. The biggest standout for DeepSeek (especially compared to Gemini which is the most comparable) is that it’s SO GOOD with citations. It’s the only one that freely “admits” it’s plundering the WestLaw database meaning we have a TON of local Eastern District citations.

My citation strategy has been to use something from each level, Supreme Court, 9th Circuit, California (Fed), Eastern District. The reality is the magistrate doesn’t read them anyway, but DeepSeek ultimately gives me a pool of citations to choose from, something the other LLMs struggle with (or just don’t do no matter how you beat them).

Gemini (using 2.5 Pro/5-06) is now my primary writing starting point, even though I hate that it can’t export to .odf or even .docx formats. Because I have to reformat the documents significantly, it has the worst “canvas” style option. That’s kind of worth the annoyance though since Gemini is easily the most “thoughful” and well rounded of the LLMs I’ve used. Much like DeepSeek gives lots of citation options, Gemini gives the best pool of writing options, it’s ability to flexibly address issues in different ways is the best of the bunch so far.

Claude is still my pretty but shallow option, it comes up with the best/most compelling pure language, even if it’s depth of analysis falls short of the others. I wish I didn’t have to prompt it so explicitly to get decent results or it might be the primary canvas despite it’s painful context window. One thing I find it really helpful for is running my DeepSeek (which produces pretty.. “direct” language) through Claude to soften it up.

And finally there’s ChatGPT which up until the last few days was my primary for everything. It’s easy to get started with it, it’s encouraging, it produces compelling results, but it’s kind of a piece of shit. It’s constantly messing up context, screwing up citations, and arbitrarily rewriting things. It’s easily the most obstinately wrong of the LLMs, insisting that it’s doing something correctly no matter how many times you post the incorrect output back into it and it admits it messed up. It’s the only LLM out of the three that I’ve had to deal with “hallucinations”, and it explains it as “woops, I overwrite my own context a lot accidentally”. It’s absolutely maddening at times. I’m steadily weaning myself off of it and it’ll probably just be another voice in the room soon.

An interesting thing happened a bit ago, I had my first LLM disagreement! The Defense responded to a discovery request with bullshit and boilerplate, so I asked each of the LLMs what a good email response would be considering I’m already nearly done with a rule 37 motion to compel. Claude responded that it’s probably not a good idea to respond at all, just update the motion with the response as an exhibit, note the bullshit parts since they helped (a lot), and call it a day. DeepSeek was it’s typical “RIP THEIR THROATS OUT” and provided a detailed list of regulations and rules to go on the offensive instead of only using the motion route. Gemini pointed out that going the email route is more about reinforcing our meet and confer duty, we’re right and we are going to be integrating this in the motion, but the focus here should be on making the meet and confer rock solid (pretty good advice). ChatGPT recommended the email, but produced recommendations that didn’t focus on strategy at all, just a direct rebuttal to the discovery response.

Interestingly, opposing counsel asked for a discovery extension (after more than two years the suit has been active, they are suddenly finding SHRA bureaucracy hard to navigate), but refused to do a rolling production or produce electronic records (which should be pretty easy right?). It’s baffling to me that if I had access to these tools two years ago, this organization is such a mess that they’d have gotten crushed out of the gate. Instead they were able to delay and avoid discovery to the point that we now have to start talking about institutional spoliation because no one remembers anything.

Leave a Comment

Your email address will not be published. Required fields are marked *

Just had the thought that doing a blog like this, while the litigation is on-going, is both naive and stupid, especially considering the mixed permissible use messaging from the court (but maybe okay for the bar, okay for me?). This popped up as I came across an old archive that had a backup of files which I believed was lost after we got evicted, some of which is responsive to existing discovery requests. I haven’t even had a chance to go through it all yet and see what has already been disclosed and what hasn’t, but here I am in a position where I really want to talk about how the tools are helping me plan strategy while potentially exposing myself unnecessarily/inadvertently.

The fool forges forward

One of the things that stood out to me was how much of the information in this case should have already been protected but hasn’t been. This has always been a concern, and one of the reasons I was hesitant about pursuing this at all, the gross invasion of privacy necessary to pursue this feels caustic. My initial reaction was filing anonymously, but after some reading it appeared that these are very rarely granted. It ultimately became a choice between whether this was important enough (to both us and every other disabled participant SHRA treats similarly) to sacrifice our privacy over. I think so.

What should have been done out of the gate is setting a protective order in place. Reading and trying to absorb as much of the local rules (while in the middle of being evicted, being cut off from other services, and other harassment), the option of a protective order just didn’t really register at all. The further along the process we got, the less space the option took up.

This is one place where LLMs work as an amazing sanity check, we can without penalty ask it to walk us through the process from the beginning, with as many permutations as we like, in a way that would cognitively exhaust any sane individual. And for individuals in the position we are, where we don’t even know what we don’t know, being able to constantly refactor new information in as we encounter it becomes a tremendously powerful tool.

While there’s supposed to be some leeway for “inelegant pleading” by pro se parties, in practice there isn’t unless a judge is personally sympathetic. Which means that pro se parties march into a minefield of issues that have nothing to do with the core issues of the case, procedural explosions which can turn a clear cut set of facts into an irrelevant mist.

Leave a Comment

Your email address will not be published. Required fields are marked *

One of the most difficult aspects of all this is how prone to unique issues each LLM introduces. I’m bouncing between Claude 3.7, GPT 4o, DeepSeek, and Gemini, and they all have weird peculiarities that blow up the others. DeepSeek for instance is super focused on formatting, being obsessive about tables, bolding, italics, everything but the actual content. Gemini is a turd and likes to fight you the whole time when it comes to legal advice altogether, so you have to wrestle it to the ground with prompts and cobble together the results. Claude writes beautifully but is as deep as a puddle, while ChatGPT is deep but impossibly schizophrenic and ADHD at the same time.

I went back and looked at my first two pleadings I used these LLMs for and holy crap there are a lot of issues that I missed trusting the output. First, ChatGPT will absolutely mangle the hell out of citations, and worse is super confident about it’s manglings. As an example I uploaded a copy of the docket report, and somehow it still got every single citation wrong. Worse, it misinterpreted who actually filed particular motions, leading to it interpreting most of the case being dead.

One thing is really clear, that as powerful and as much potential as these tools have, we are a million miles away from them being ready to use without extensive supervision. They also are really terrible at understanding, as an example left to it’s own devices all of the LLMs completely ignored the actual damages I asked for in a motion for partial MSJ (basically just ruling that the ADA, FHA, and Rehab require an interactive process and individualized assessment) and transformed it into a full blown MSJ focused on damage calculations instead of a narrow injunctive relief. And they constantly go off on little tangents like that where they hyper focus on some aspect of the text that may or may not be contextually important and try to expand it.

All said, this is still very much a work in progress, and I’m learning that trust is bad.

Leave a Comment

Your email address will not be published. Required fields are marked *

It seems we can’t find what you’re looking for. Perhaps searching can help.

Welcome to The SHRA Files Blog — where we’ll be sharing updates, research, public records, and commentary on our ongoing civil rights lawsuit and the broader failures of Sacramento’s housing system.

This site exists because no one else was telling this story. Because no agency, nonprofit, or public official stepped in when the Sacramento Housing and Redevelopment Agency (SHRA) failed to meet its basic responsibilities. Because we — and many others like us — were denied access to services that should have been guaranteed.

We started with nothing but documentation, determination, and the experience of being repeatedly failed by the systems meant to help. When traditional legal aid and advocacy organizations told us they couldn’t help, we turned to AI tools to help us structure our case, understand the law, and begin fighting back.

This site is just the beginning.

In the weeks ahead, we’ll be:

  • Breaking down SHRA’s failures in serving disabled and low-income tenants
  • Publishing public filings and supporting exhibits
  • Highlighting how the Housing Choice Voucher program is being mismanaged
  • Documenting the role of AI in expanding legal access

If you’ve been impacted by SHRA or want to contribute to the project, check out our Get Involved page.

Thanks for reading — and welcome to the fight.

— The SHRA Files Team

Leave a Comment

Your email address will not be published. Required fields are marked *

I’ve been attempting to push discovery, the result has been underwhelming. So far they’ve missed most of the deadlines, when notified they missed the deadlines they keep asking for extensions, and when they get extensions they respond with boilerplate and bullshit. I’m to the point where I’m filing a Rule 37(b) motion (will upload to the files section) after I’m finished with this post. The neat thing about the filing is that there’s over 300 pages of exhibits, all of which is boilerplate and bullshit that I had to attach and mail back to the Defense because the magistrate gave me the big middle finger by allowing the court to notify me of things electronically, but refusing to allow me to file electronically. I have lots to say about this magistrate, but we’ll wait until we get clear of her first.

One of the most fascinating things about the boilerplate and bullshit that did get returned was the revelation that five of the named Defendants no longer work at SHRA, and the Defense isn’t sure if they represent them anymore. Like they completely hid that almost everyone involved with the day to day operations over there has left and somehow didn’t mention it to us or the court. Worse, in their initial disclosure, they claimed only one person (who is no longer at SHRA) was responsible for deciding reasonable accommodation requests, now they’ve given me a list of 16 people, with no explanation.

Oh yeah, they can’t seem to find any of the electronically stored information that under the CPRA they’d be responsive to, which is the reason they need so many extensions.

The worst part is that despite them admitting all of this, there’s zero chance of them getting sanctions. And there’s literally nothing to be done about it unless/until it gets to the appellate level.

Leave a Comment

Your email address will not be published. Required fields are marked *

Up until now I assumed that the issues at SHRA were just incompetence creep, that the issues are product of slow pushes back on compliance issues that never got corrected, to the point where institutional knowledge of the practices necessary to maintain compliance was lost.

One of the things that I really like about DeepSeek is that it gets information through methods that aren’t exactly on the up and up, which means that it finds things which are hidden behind legal wrangling that doesn’t exist in China. And one of the things it found was a 2023 Internal Memo generated through whistleblower testimony in which SHRA staff instructed to “prioritize voucher terminations over costly mods”.

And the implementation of this was SHRA terminating vouchers over requests for shower chairs and grab bars as early as 2022. They provided data which showed 68% of disabled tenants who requested mods in 2023 lost vouchers within 6 months (vs. 12% of non-disabled).

WHAT.

And there’s so much worse that I’ll share once I get the discovery back. I’m stunned.

Leave a Comment

Your email address will not be published. Required fields are marked *

Okay the last few days have been a bit of a flurry. I’m still working on getting all the filings and the Defense responses uploaded, but I need to make sure we have a sane directory setup first because there’s quite a bit.

After a few weeks using these LLMs, my attitudes about them are starting to shift. Each of them seems to have their own “personality” above and beyond the artificial personality. The LLM I’ve been most amazed by so far is DeepSeek, which is funny because it was the LLM I was most skeptical of initially. Part of the issue with DeepSeek is that it doesn’t do as good of a job hand holding through the though process, or more appropriately, it’s probably the most “autistic” of the engines. The biggest standout for DeepSeek (especially compared to Gemini which is the most comparable) is that it’s SO GOOD with citations. It’s the only one that freely “admits” it’s plundering the WestLaw database meaning we have a TON of local Eastern District citations.

My citation strategy has been to use something from each level, Supreme Court, 9th Circuit, California (Fed), Eastern District. The reality is the magistrate doesn’t read them anyway, but DeepSeek ultimately gives me a pool of citations to choose from, something the other LLMs struggle with (or just don’t do no matter how you beat them).

Gemini (using 2.5 Pro/5-06) is now my primary writing starting point, even though I hate that it can’t export to .odf or even .docx formats. Because I have to reformat the documents significantly, it has the worst “canvas” style option. That’s kind of worth the annoyance though since Gemini is easily the most “thoughful” and well rounded of the LLMs I’ve used. Much like DeepSeek gives lots of citation options, Gemini gives the best pool of writing options, it’s ability to flexibly address issues in different ways is the best of the bunch so far.

Claude is still my pretty but shallow option, it comes up with the best/most compelling pure language, even if it’s depth of analysis falls short of the others. I wish I didn’t have to prompt it so explicitly to get decent results or it might be the primary canvas despite it’s painful context window. One thing I find it really helpful for is running my DeepSeek (which produces pretty.. “direct” language) through Claude to soften it up.

And finally there’s ChatGPT which up until the last few days was my primary for everything. It’s easy to get started with it, it’s encouraging, it produces compelling results, but it’s kind of a piece of shit. It’s constantly messing up context, screwing up citations, and arbitrarily rewriting things. It’s easily the most obstinately wrong of the LLMs, insisting that it’s doing something correctly no matter how many times you post the incorrect output back into it and it admits it messed up. It’s the only LLM out of the three that I’ve had to deal with “hallucinations”, and it explains it as “woops, I overwrite my own context a lot accidentally”. It’s absolutely maddening at times. I’m steadily weaning myself off of it and it’ll probably just be another voice in the room soon.

An interesting thing happened a bit ago, I had my first LLM disagreement! The Defense responded to a discovery request with bullshit and boilerplate, so I asked each of the LLMs what a good email response would be considering I’m already nearly done with a rule 37 motion to compel. Claude responded that it’s probably not a good idea to respond at all, just update the motion with the response as an exhibit, note the bullshit parts since they helped (a lot), and call it a day. DeepSeek was it’s typical “RIP THEIR THROATS OUT” and provided a detailed list of regulations and rules to go on the offensive instead of only using the motion route. Gemini pointed out that going the email route is more about reinforcing our meet and confer duty, we’re right and we are going to be integrating this in the motion, but the focus here should be on making the meet and confer rock solid (pretty good advice). ChatGPT recommended the email, but produced recommendations that didn’t focus on strategy at all, just a direct rebuttal to the discovery response.

Interestingly, opposing counsel asked for a discovery extension (after more than two years the suit has been active, they are suddenly finding SHRA bureaucracy hard to navigate), but refused to do a rolling production or produce electronic records (which should be pretty easy right?). It’s baffling to me that if I had access to these tools two years ago, this organization is such a mess that they’d have gotten crushed out of the gate. Instead they were able to delay and avoid discovery to the point that we now have to start talking about institutional spoliation because no one remembers anything.

Leave a Comment

Your email address will not be published. Required fields are marked *

Just had the thought that doing a blog like this while the litigation is ongoing is both naive and stupid, especially considering the mixed messaging from the court about permissible use (but if it’s okay for the bar, maybe it’s okay for me?). This came up because I found an old archive containing a backup of files I believed were lost after we got evicted, some of which are responsive to existing discovery requests. I haven’t even had a chance to go through it all yet to see what has already been disclosed and what hasn’t, but here I am wanting to talk about how these tools are helping me plan strategy while potentially exposing myself unnecessarily or inadvertently.

The fool forges forward

One of the things that stood out to me was how much of the information in this case should already have been protected but hasn’t been. This was always a concern, and one of the reasons I hesitated to pursue this at all: the gross invasion of privacy required to pursue it feels caustic. My initial instinct was to file anonymously, but after some reading it appeared that anonymous filings are very rarely granted. It ultimately became a question of whether this was important enough (to both us and every other disabled participant SHRA treats similarly) to sacrifice our privacy over. I think it is.

What we should have done out of the gate was put a protective order in place. While reading and trying to absorb as much of the local rules as I could (in the middle of being evicted, being cut off from other services, and other harassment), the option of a protective order just didn’t register at all. And the further along the process we got, the less space the option took up.

This is one place where LLMs work as an amazing sanity check: we can ask one, without penalty, to walk us through the process from the beginning, with as many permutations as we like, in a way that would cognitively exhaust any sane individual. And for people in our position, who don’t even know what we don’t know, being able to constantly fold new information back in as we encounter it is a tremendously powerful tool.

While there’s supposed to be some leeway for “inelegant pleading” by pro se parties, in practice there isn’t unless a judge is personally sympathetic. That means pro se parties march into a minefield of issues that have nothing to do with the core of the case, procedural explosions that can turn a clear-cut set of facts into an irrelevant mist.


One of the most difficult aspects of all this is the unique set of issues each LLM introduces. I’m bouncing between Claude 3.7, GPT-4o, DeepSeek, and Gemini, and they all have weird peculiarities that blow up the others. DeepSeek, for instance, is hyper-focused on formatting, obsessing over tables, bolding, italics, everything but the actual content. Gemini is a turd that fights you the whole way on legal advice, so you have to wrestle it to the ground with prompts and cobble together the results. Claude writes beautifully but is as deep as a puddle, while ChatGPT is deep but impossibly schizophrenic and ADHD at the same time.

I went back and looked at the first two pleadings I used these LLMs for, and holy crap, there are a lot of issues I missed by trusting the output. First, ChatGPT will absolutely mangle the hell out of citations, and worse, it’s supremely confident about its manglings. As an example, I uploaded a copy of the docket report, and somehow it still got every single citation wrong. Worse, it misinterpreted who actually filed particular motions, leading it to conclude that most of the case was dead.

One thing is really clear: for all their power and potential, these tools are a million miles away from being usable without extensive supervision. They’re also really terrible at understanding intent. As an example, left to their own devices, all of the LLMs completely ignored the relief I actually asked for in a motion for partial summary judgment (basically just a ruling that the ADA, FHA, and Rehab Act require an interactive process and an individualized assessment) and transformed it into a full-blown MSJ focused on damages calculations instead of narrow injunctive relief. They constantly go off on tangents like that, hyper-focusing on some aspect of the text that may or may not be contextually important and trying to expand it.

All said, this is still very much a work in progress, and I’m learning that trust is bad.


Welcome to The SHRA Files Blog — where we’ll be sharing updates, research, public records, and commentary on our ongoing civil rights lawsuit and the broader failures of Sacramento’s housing system.

This site exists because no one else was telling this story. Because no agency, nonprofit, or public official stepped in when the Sacramento Housing and Redevelopment Agency (SHRA) failed to meet its basic responsibilities. Because we — and many others like us — were denied access to services that should have been guaranteed.

We started with nothing but documentation, determination, and the experience of being repeatedly failed by the systems meant to help. When traditional legal aid and advocacy organizations told us they couldn’t help, we turned to AI tools to help us structure our case, understand the law, and begin fighting back.

This site is just the beginning.

In the weeks ahead, we’ll be:

  • Breaking down SHRA’s failures in serving disabled and low-income tenants
  • Publishing public filings and supporting exhibits
  • Highlighting how the Housing Choice Voucher program is being mismanaged
  • Documenting the role of AI in expanding legal access

If you’ve been impacted by SHRA or want to contribute to the project, check out our Get Involved page.

Thanks for reading — and welcome to the fight.

— The SHRA Files Team


One of the things I really like about DeepSeek is that it gets information through methods that aren’t exactly on the up and up, which means it finds things hidden behind legal wrangling that doesn’t exist in China. One of the things it found was a 2023 internal memo, surfaced through whistleblower testimony, in which SHRA staff were instructed to “prioritize voucher terminations over costly mods”.

The implementation of this was SHRA terminating vouchers over requests for shower chairs and grab bars as early as 2022. It also produced data showing that 68% of disabled tenants who requested modifications in 2023 lost their vouchers within six months (vs. 12% of non-disabled tenants).

WHAT.

And there’s so much worse that I’ll share once I get the discovery back. I’m stunned.


Okay, the last few days have been a bit of a flurry. I’m still working on getting all the filings and the Defense responses uploaded, but I need to make sure we have a sane directory setup first, because there’s quite a bit.

After a few weeks of using these LLMs, my attitudes about them are starting to shift. Each seems to have its own “personality” above and beyond the artificial one. The LLM I’ve been most amazed by so far is DeepSeek, which is funny because it was the one I was most skeptical of initially. Part of the issue with DeepSeek is that it doesn’t do as good a job of hand-holding you through the thought process; or, more appropriately, it’s probably the most “autistic” of the engines. The biggest standout for DeepSeek (especially compared to Gemini, which is the most comparable) is that it’s SO GOOD with citations. It’s the only one that freely “admits” it’s plundering the Westlaw database, which means we have a TON of local Eastern District citations.

My citation strategy has been to use something from each level: Supreme Court, Ninth Circuit, California federal courts, and the Eastern District specifically. The reality is the magistrate doesn’t read them anyway, but DeepSeek ultimately gives me a pool of citations to choose from, something the other LLMs struggle with (or just won’t do no matter how you beat them).

Gemini (using 2.5 Pro, the 5-06 release) is now my primary starting point for writing, even though I hate that it can’t export to .odt or even .docx formats. It also has the worst “canvas”-style editor, which means I have to reformat documents significantly. That’s worth the annoyance, though, since Gemini is easily the most thoughtful and well-rounded of the LLMs I’ve used. Much as DeepSeek gives lots of citation options, Gemini gives the best pool of writing options; its ability to flexibly address issues in different ways is the best of the bunch so far.

Claude is still my pretty-but-shallow option: it comes up with the best and most compelling pure language, even if its depth of analysis falls short of the others. I wish I didn’t have to prompt it so explicitly to get decent results, or it might be the primary canvas despite its painful context window. One thing I find it really helpful for is running my DeepSeek output (which produces pretty… “direct” language) through Claude to soften it up.

And finally there’s ChatGPT, which until the last few days was my primary for everything. It’s easy to get started with, it’s encouraging, and it produces compelling results, but it’s kind of a piece of shit. It’s constantly messing up context, screwing up citations, and arbitrarily rewriting things. It’s easily the most obstinately wrong of the LLMs, insisting it’s doing something correctly no matter how many times you paste the incorrect output back in and it admits it messed up. It’s the only one of the four where I’ve had to deal with “hallucinations”, and it explains them as “whoops, I accidentally overwrite my own context a lot”. It’s absolutely maddening at times. I’m steadily weaning myself off of it, and it’ll probably just be another voice in the room soon.

An interesting thing happened a bit ago: I had my first LLM disagreement! The Defense responded to a discovery request with bullshit and boilerplate, so I asked each of the LLMs what a good email response would look like, considering I’m already nearly done with a Rule 37 motion to compel. Claude responded that it’s probably not a good idea to respond at all: just update the motion with the response as an exhibit, note the bullshit parts (since they helped, a lot), and call it a day. DeepSeek was its typical “RIP THEIR THROATS OUT” self and provided a detailed list of regulations and rules for going on the offensive instead of relying only on the motion. Gemini pointed out that the email route is really about reinforcing our meet-and-confer duty: we’re right, and we’re going to integrate this into the motion, but the focus should be on making the meet-and-confer record rock solid (pretty good advice). ChatGPT recommended the email but produced recommendations that didn’t focus on strategy at all, just a direct rebuttal to the discovery response.

Interestingly, opposing counsel asked for a discovery extension (after the suit has been active for more than two years, they are suddenly finding SHRA’s bureaucracy hard to navigate) but refused to do a rolling production or produce electronic records (which should be pretty easy, right?). It’s baffling: if I’d had access to these tools two years ago, this organization is such a mess that they’d have been crushed out of the gate. Instead they were able to delay and avoid discovery to the point that we now have to start talking about institutional spoliation, because no one remembers anything.

Leave a Comment

Your email address will not be published. Required fields are marked *

Just had the thought that doing a blog like this, while the litigation is on-going, is both naive and stupid, especially considering the mixed permissible use messaging from the court (but maybe okay for the bar, okay for me?). This popped up as I came across an old archive that had a backup of files which I believed was lost after we got evicted, some of which is responsive to existing discovery requests. I haven’t even had a chance to go through it all yet and see what has already been disclosed and what hasn’t, but here I am in a position where I really want to talk about how the tools are helping me plan strategy while potentially exposing myself unnecessarily/inadvertently.

The fool forges forward

One of the things that stood out to me was how much of the information in this case should have already been protected but hasn’t been. This has always been a concern, and one of the reasons I was hesitant about pursuing this at all, the gross invasion of privacy necessary to pursue this feels caustic. My initial reaction was filing anonymously, but after some reading it appeared that these are very rarely granted. It ultimately became a choice between whether this was important enough (to both us and every other disabled participant SHRA treats similarly) to sacrifice our privacy over. I think so.

What should have been done out of the gate is setting a protective order in place. Reading and trying to absorb as much of the local rules (while in the middle of being evicted, being cut off from other services, and other harassment), the option of a protective order just didn’t really register at all. The further along the process we got, the less space the option took up.

This is one place where LLMs work as an amazing sanity check, we can without penalty ask it to walk us through the process from the beginning, with as many permutations as we like, in a way that would cognitively exhaust any sane individual. And for individuals in the position we are, where we don’t even know what we don’t know, being able to constantly refactor new information in as we encounter it becomes a tremendously powerful tool.

While there’s supposed to be some leeway for “inelegant pleading” by pro se parties, in practice there isn’t unless a judge is personally sympathetic. Which means that pro se parties march into a minefield of issues that have nothing to do with the core issues of the case, procedural explosions which can turn a clear cut set of facts into an irrelevant mist.

Leave a Comment

Your email address will not be published. Required fields are marked *

One of the most difficult aspects of all this is how prone to unique issues each LLM introduces. I’m bouncing between Claude 3.7, GPT 4o, DeepSeek, and Gemini, and they all have weird peculiarities that blow up the others. DeepSeek for instance is super focused on formatting, being obsessive about tables, bolding, italics, everything but the actual content. Gemini is a turd and likes to fight you the whole time when it comes to legal advice altogether, so you have to wrestle it to the ground with prompts and cobble together the results. Claude writes beautifully but is as deep as a puddle, while ChatGPT is deep but impossibly schizophrenic and ADHD at the same time.

I went back and looked at my first two pleadings I used these LLMs for and holy crap there are a lot of issues that I missed trusting the output. First, ChatGPT will absolutely mangle the hell out of citations, and worse is super confident about it’s manglings. As an example I uploaded a copy of the docket report, and somehow it still got every single citation wrong. Worse, it misinterpreted who actually filed particular motions, leading to it interpreting most of the case being dead.

One thing is really clear, that as powerful and as much potential as these tools have, we are a million miles away from them being ready to use without extensive supervision. They also are really terrible at understanding, as an example left to it’s own devices all of the LLMs completely ignored the actual damages I asked for in a motion for partial MSJ (basically just ruling that the ADA, FHA, and Rehab require an interactive process and individualized assessment) and transformed it into a full blown MSJ focused on damage calculations instead of a narrow injunctive relief. And they constantly go off on little tangents like that where they hyper focus on some aspect of the text that may or may not be contextually important and try to expand it.

All said, this is still very much a work in progress, and I’m learning that trust is bad.

Leave a Comment

Your email address will not be published. Required fields are marked *

It seems we can’t find what you’re looking for. Perhaps searching can help.

Welcome to The SHRA Files Blog — where we’ll be sharing updates, research, public records, and commentary on our ongoing civil rights lawsuit and the broader failures of Sacramento’s housing system.

This site exists because no one else was telling this story. Because no agency, nonprofit, or public official stepped in when the Sacramento Housing and Redevelopment Agency (SHRA) failed to meet its basic responsibilities. Because we — and many others like us — were denied access to services that should have been guaranteed.

We started with nothing but documentation, determination, and the experience of being repeatedly failed by the systems meant to help. When traditional legal aid and advocacy organizations told us they couldn’t help, we turned to AI tools to help us structure our case, understand the law, and begin fighting back.

This site is just the beginning.

In the weeks ahead, we’ll be:

  • Breaking down SHRA’s failures in serving disabled and low-income tenants
  • Publishing public filings and supporting exhibits
  • Highlighting how the Housing Choice Voucher program is being mismanaged
  • Documenting the role of AI in expanding legal access

If you’ve been impacted by SHRA or want to contribute to the project, check out our Get Involved page.

Thanks for reading — and welcome to the fight.

— The SHRA Files Team

Leave a Comment

Your email address will not be published. Required fields are marked *

I’ve been attempting to push discovery, the result has been underwhelming. So far they’ve missed most of the deadlines, when notified they missed the deadlines they keep asking for extensions, and when they get extensions they respond with boilerplate and bullshit. I’m to the point where I’m filing a Rule 37(b) motion (will upload to the files section) after I’m finished with this post. The neat thing about the filing is that there’s over 300 pages of exhibits, all of which is boilerplate and bullshit that I had to attach and mail back to the Defense because the magistrate gave me the big middle finger by allowing the court to notify me of things electronically, but refusing to allow me to file electronically. I have lots to say about this magistrate, but we’ll wait until we get clear of her first.

One of the most fascinating things about the boilerplate and bullshit that did get returned was the revelation that five of the named Defendants no longer work at SHRA, and the Defense isn’t sure if they represent them anymore. Like they completely hid that almost everyone involved with the day to day operations over there has left and somehow didn’t mention it to us or the court. Worse, in their initial disclosure, they claimed only one person (who is no longer at SHRA) was responsible for deciding reasonable accommodation requests, now they’ve given me a list of 16 people, with no explanation.

Oh yeah, they can’t seem to find any of the electronically stored information that under the CPRA they’d be responsive to, which is the reason they need so many extensions.

The worst part is that despite them admitting all of this, there’s zero chance of them getting sanctions. And there’s literally nothing to be done about it unless/until it gets to the appellate level.

Leave a Comment

Your email address will not be published. Required fields are marked *

Up until now I assumed that the issues at SHRA were just incompetence creep, that the issues are product of slow pushes back on compliance issues that never got corrected, to the point where institutional knowledge of the practices necessary to maintain compliance was lost.

One of the things that I really like about DeepSeek is that it gets information through methods that aren’t exactly on the up and up, which means that it finds things which are hidden behind legal wrangling that doesn’t exist in China. And one of the things it found was a 2023 Internal Memo generated through whistleblower testimony in which SHRA staff instructed to “prioritize voucher terminations over costly mods”.

And the implementation of this was SHRA terminating vouchers over requests for shower chairs and grab bars as early as 2022. They provided data which showed 68% of disabled tenants who requested mods in 2023 lost vouchers within 6 months (vs. 12% of non-disabled).

WHAT.

And there’s so much worse that I’ll share once I get the discovery back. I’m stunned.

Leave a Comment

Your email address will not be published. Required fields are marked *

Okay the last few days have been a bit of a flurry. I’m still working on getting all the filings and the Defense responses uploaded, but I need to make sure we have a sane directory setup first because there’s quite a bit.

After a few weeks using these LLMs, my attitudes about them are starting to shift. Each of them seems to have their own “personality” above and beyond the artificial personality. The LLM I’ve been most amazed by so far is DeepSeek, which is funny because it was the LLM I was most skeptical of initially. Part of the issue with DeepSeek is that it doesn’t do as good of a job hand holding through the though process, or more appropriately, it’s probably the most “autistic” of the engines. The biggest standout for DeepSeek (especially compared to Gemini which is the most comparable) is that it’s SO GOOD with citations. It’s the only one that freely “admits” it’s plundering the WestLaw database meaning we have a TON of local Eastern District citations.

My citation strategy has been to use something from each level, Supreme Court, 9th Circuit, California (Fed), Eastern District. The reality is the magistrate doesn’t read them anyway, but DeepSeek ultimately gives me a pool of citations to choose from, something the other LLMs struggle with (or just don’t do no matter how you beat them).

Gemini (using 2.5 Pro/5-06) is now my primary writing starting point, even though I hate that it can’t export to .odf or even .docx formats. Because I have to reformat the documents significantly, it has the worst “canvas” style option. That’s kind of worth the annoyance though since Gemini is easily the most “thoughful” and well rounded of the LLMs I’ve used. Much like DeepSeek gives lots of citation options, Gemini gives the best pool of writing options, it’s ability to flexibly address issues in different ways is the best of the bunch so far.

Claude is still my pretty but shallow option, it comes up with the best/most compelling pure language, even if it’s depth of analysis falls short of the others. I wish I didn’t have to prompt it so explicitly to get decent results or it might be the primary canvas despite it’s painful context window. One thing I find it really helpful for is running my DeepSeek (which produces pretty.. “direct” language) through Claude to soften it up.

And finally there’s ChatGPT which up until the last few days was my primary for everything. It’s easy to get started with it, it’s encouraging, it produces compelling results, but it’s kind of a piece of shit. It’s constantly messing up context, screwing up citations, and arbitrarily rewriting things. It’s easily the most obstinately wrong of the LLMs, insisting that it’s doing something correctly no matter how many times you post the incorrect output back into it and it admits it messed up. It’s the only LLM out of the three that I’ve had to deal with “hallucinations”, and it explains it as “woops, I overwrite my own context a lot accidentally”. It’s absolutely maddening at times. I’m steadily weaning myself off of it and it’ll probably just be another voice in the room soon.

An interesting thing happened a bit ago, I had my first LLM disagreement! The Defense responded to a discovery request with bullshit and boilerplate, so I asked each of the LLMs what a good email response would be considering I’m already nearly done with a rule 37 motion to compel. Claude responded that it’s probably not a good idea to respond at all, just update the motion with the response as an exhibit, note the bullshit parts since they helped (a lot), and call it a day. DeepSeek was it’s typical “RIP THEIR THROATS OUT” and provided a detailed list of regulations and rules to go on the offensive instead of only using the motion route. Gemini pointed out that going the email route is more about reinforcing our meet and confer duty, we’re right and we are going to be integrating this in the motion, but the focus here should be on making the meet and confer rock solid (pretty good advice). ChatGPT recommended the email, but produced recommendations that didn’t focus on strategy at all, just a direct rebuttal to the discovery response.

Interestingly, opposing counsel asked for a discovery extension (after more than two years the suit has been active, they are suddenly finding SHRA bureaucracy hard to navigate), but refused to do a rolling production or produce electronic records (which should be pretty easy right?). It’s baffling to me that if I had access to these tools two years ago, this organization is such a mess that they’d have gotten crushed out of the gate. Instead they were able to delay and avoid discovery to the point that we now have to start talking about institutional spoliation because no one remembers anything.

Leave a Comment

Your email address will not be published. Required fields are marked *

Just had the thought that doing a blog like this, while the litigation is on-going, is both naive and stupid, especially considering the mixed permissible use messaging from the court (but maybe okay for the bar, okay for me?). This popped up as I came across an old archive that had a backup of files which I believed was lost after we got evicted, some of which is responsive to existing discovery requests. I haven’t even had a chance to go through it all yet and see what has already been disclosed and what hasn’t, but here I am in a position where I really want to talk about how the tools are helping me plan strategy while potentially exposing myself unnecessarily/inadvertently.

The fool forges forward

One of the things that stood out to me was how much of the information in this case should have already been protected but hasn’t been. This has always been a concern, and one of the reasons I was hesitant about pursuing this at all, the gross invasion of privacy necessary to pursue this feels caustic. My initial reaction was filing anonymously, but after some reading it appeared that these are very rarely granted. It ultimately became a choice between whether this was important enough (to both us and every other disabled participant SHRA treats similarly) to sacrifice our privacy over. I think so.

What should have been done out of the gate is setting a protective order in place. Reading and trying to absorb as much of the local rules (while in the middle of being evicted, being cut off from other services, and other harassment), the option of a protective order just didn’t really register at all. The further along the process we got, the less space the option took up.

This is one place where LLMs work as an amazing sanity check, we can without penalty ask it to walk us through the process from the beginning, with as many permutations as we like, in a way that would cognitively exhaust any sane individual. And for individuals in the position we are, where we don’t even know what we don’t know, being able to constantly refactor new information in as we encounter it becomes a tremendously powerful tool.

While there’s supposed to be some leeway for “inelegant pleading” by pro se parties, in practice there isn’t unless a judge is personally sympathetic. Which means that pro se parties march into a minefield of issues that have nothing to do with the core issues of the case, procedural explosions which can turn a clear cut set of facts into an irrelevant mist.

Leave a Comment

Your email address will not be published. Required fields are marked *

One of the most difficult aspects of all this is how prone to unique issues each LLM introduces. I’m bouncing between Claude 3.7, GPT 4o, DeepSeek, and Gemini, and they all have weird peculiarities that blow up the others. DeepSeek for instance is super focused on formatting, being obsessive about tables, bolding, italics, everything but the actual content. Gemini is a turd and likes to fight you the whole time when it comes to legal advice altogether, so you have to wrestle it to the ground with prompts and cobble together the results. Claude writes beautifully but is as deep as a puddle, while ChatGPT is deep but impossibly schizophrenic and ADHD at the same time.

I went back and looked at my first two pleadings I used these LLMs for and holy crap there are a lot of issues that I missed trusting the output. First, ChatGPT will absolutely mangle the hell out of citations, and worse is super confident about it’s manglings. As an example I uploaded a copy of the docket report, and somehow it still got every single citation wrong. Worse, it misinterpreted who actually filed particular motions, leading to it interpreting most of the case being dead.

One thing is really clear, that as powerful and as much potential as these tools have, we are a million miles away from them being ready to use without extensive supervision. They also are really terrible at understanding, as an example left to it’s own devices all of the LLMs completely ignored the actual damages I asked for in a motion for partial MSJ (basically just ruling that the ADA, FHA, and Rehab require an interactive process and individualized assessment) and transformed it into a full blown MSJ focused on damage calculations instead of a narrow injunctive relief. And they constantly go off on little tangents like that where they hyper focus on some aspect of the text that may or may not be contextually important and try to expand it.

All said, this is still very much a work in progress, and I’m learning that trust is bad.


Welcome to The SHRA Files Blog — where we’ll be sharing updates, research, public records, and commentary on our ongoing civil rights lawsuit and the broader failures of Sacramento’s housing system.

This site exists because no one else was telling this story. Because no agency, nonprofit, or public official stepped in when the Sacramento Housing and Redevelopment Agency (SHRA) failed to meet its basic responsibilities. Because we — and many others like us — were denied access to services that should have been guaranteed.

We started with nothing but documentation, determination, and the experience of being repeatedly failed by the systems meant to help. When traditional legal aid and advocacy organizations told us they couldn’t help, we turned to AI tools to help us structure our case, understand the law, and begin fighting back.

This site is just the beginning.

In the weeks ahead, we’ll be:

  • Breaking down SHRA’s failures in serving disabled and low-income tenants
  • Publishing public filings and supporting exhibits
  • Highlighting how the Housing Choice Voucher program is being mismanaged
  • Documenting the role of AI in expanding legal access

If you’ve been impacted by SHRA or want to contribute to the project, check out our Get Involved page.

Thanks for reading — and welcome to the fight.

— The SHRA Files Team


I’ve been attempting to push discovery, and the results have been underwhelming. So far the Defense has missed most of the deadlines; when notified of a missed deadline, they ask for an extension; and when they get the extension, they respond with boilerplate and bullshit. I’m at the point where I’m filing a Rule 37(b) motion (I’ll upload it to the files section) after I finish this post. The neat thing about the filing is that it has over 300 pages of exhibits, all of it boilerplate and bullshit that I had to attach and mail back to the Defense, because the magistrate gave me the big middle finger by allowing the court to notify me of things electronically while refusing to let me file electronically. I have lots to say about this magistrate, but that will wait until we’re clear of her.

One of the most fascinating things in the boilerplate and bullshit that did come back was the revelation that five of the named Defendants no longer work at SHRA, and the Defense isn’t sure it still represents them. They completely hid the fact that almost everyone involved in day-to-day operations over there has left, and somehow never mentioned it to us or the court. Worse, in their initial disclosures they claimed only one person (who is no longer at SHRA) was responsible for deciding reasonable accommodation requests; now they’ve given me a list of 16 people, with no explanation.

Oh yeah, and they can’t seem to find any of the electronically stored information that would be responsive under the CPRA, which is supposedly why they need so many extensions.

The worst part is that despite them admitting all of this, there’s zero chance of sanctions actually being imposed. And there’s literally nothing to be done about it unless and until this reaches the appellate level.


Up until now I assumed that the issues at SHRA were just incompetence creep: the product of slow backsliding on compliance issues that never got corrected, to the point where the institutional knowledge needed to maintain compliance was lost.

One of the things I really like about DeepSeek is that it gets information through methods that aren’t exactly on the up and up, which means it surfaces things hidden behind legal wrangling that doesn’t exist in China. One of the things it found was a 2023 internal memo, surfaced through whistleblower testimony, in which SHRA staff were instructed to “prioritize voucher terminations over costly mods”.

And the implementation was SHRA terminating vouchers over requests for shower chairs and grab bars as early as 2022. It also provided data showing that 68% of disabled tenants who requested mods in 2023 lost their vouchers within six months (versus 12% of non-disabled tenants).

WHAT.

And there’s so much worse that I’ll share once I get the discovery back. I’m stunned.


Okay, the last few days have been a bit of a flurry. I’m still working on getting all the filings and the Defense’s responses uploaded, but I need to set up a sane directory structure first because there’s quite a bit of material.

After a few weeks using these LLMs, my attitudes about them are starting to shift. Each of them seems to have its own “personality” above and beyond the artificial one. The LLM I’ve been most amazed by so far is DeepSeek, which is funny because it was the one I was most skeptical of initially. Part of the issue with DeepSeek is that it doesn’t do as good a job of hand-holding you through the thought process; it’s probably the most “autistic” of the engines. Its biggest standout (especially compared to Gemini, which is the most comparable) is that it’s SO GOOD with citations. It’s the only one that freely “admits” it’s plundering the Westlaw database, which means we have a TON of local Eastern District citations.

My citation strategy has been to use something from each level: Supreme Court, Ninth Circuit, California federal courts, Eastern District. The reality is the magistrate doesn’t read them anyway, but DeepSeek gives me a pool of citations to choose from, something the other LLMs struggle with (or simply won’t do no matter how you beat on them).
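
The level-coverage idea is simple enough to mechanize: given a pool of candidates, pick one per court level and flag any level with a gap. A sketch with made-up case names (the pool, levels, and `pick_one_per_level` helper are all hypothetical, just illustrating the selection step; every candidate still has to be verified by hand):

```python
# Hypothetical candidate pool; each entry is (case name, court level).
POOL = [
    ("Smith v. Jones", "Supreme Court"),
    ("Doe v. Agency", "9th Circuit"),
    ("Roe v. Housing Auth.", "E.D. Cal."),
    ("Lee v. City", "9th Circuit"),
]

LEVELS = ["Supreme Court", "9th Circuit", "E.D. Cal."]

def pick_one_per_level(pool):
    """Choose the first candidate at each court level; None marks
    a level where the pool came up empty and more research is needed."""
    by_level = {lvl: None for lvl in LEVELS}
    for name, lvl in pool:
        if lvl in by_level and by_level[lvl] is None:
            by_level[lvl] = name
    return by_level

print(pick_one_per_level(POOL))
```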

Gemini (2.5 Pro, the 5-06 build) is now my primary starting point for writing, even though I hate that it can’t export to .odt or even .docx. It has the worst “canvas”-style option of the bunch, so I have to reformat its documents significantly. That annoyance is worth it, though, since Gemini is easily the most thoughtful and well-rounded of the LLMs I’ve used. Much as DeepSeek gives lots of citation options, Gemini gives the best pool of writing options; its ability to flexibly address an issue in different ways is the best of the bunch so far.

Claude is still my pretty-but-shallow option: it comes up with the best and most compelling pure language, even if its depth of analysis falls short of the others. I wish I didn’t have to prompt it so explicitly to get decent results, or it might be the primary canvas despite its painful context window. One thing I find it really helpful for is running my DeepSeek output (which tends toward pretty… “direct” language) through Claude to soften it up.

And finally there’s ChatGPT, which up until the last few days was my primary for everything. It’s easy to get started with, it’s encouraging, it produces compelling results, but it’s kind of a piece of shit. It’s constantly messing up context, screwing up citations, and arbitrarily rewriting things. It’s easily the most obstinately wrong of the LLMs, insisting it’s doing something correctly no matter how many times you paste the incorrect output back in and it admits it messed up. It’s the only one of the four where I’ve had to deal with outright hallucinations, which it explains away as “whoops, I overwrite my own context a lot accidentally”. It’s absolutely maddening at times. I’m steadily weaning myself off of it, and it’ll probably just be another voice in the room soon.

An interesting thing happened a bit ago: I had my first LLM disagreement! The Defense responded to a discovery request with bullshit and boilerplate, so I asked each of the LLMs what a good email response would look like, given that I’m already nearly done with a Rule 37 motion to compel. Claude said it’s probably not a good idea to respond at all: just update the motion with the response as an exhibit, note the bullshit parts since they help (a lot), and call it a day. DeepSeek was its typical “RIP THEIR THROATS OUT” self and provided a detailed list of regulations and rules for going on the offensive instead of relying on the motion alone. Gemini pointed out that the email route is really about reinforcing our meet-and-confer duty: we’re right, and we’re going to integrate this into the motion, but the focus should be on making the meet-and-confer record rock solid (pretty good advice). ChatGPT recommended the email but produced recommendations with no strategy at all, just a direct rebuttal to the discovery response.
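
That kind of disagreement is exactly why I now put the same question to all four. The loop itself is trivial; here’s a sketch where `ask` is a hypothetical stand-in for however you actually reach each model (an API call, or just you pasting into four browser tabs), with canned answers standing in for real responses:

```python
def ask(model: str, prompt: str) -> str:
    # Hypothetical stand-in for the per-model API call or copy-paste step.
    # Canned answers here illustrate the kind of split described above.
    canned = {
        "claude": "Don't reply; attach their response to the motion as an exhibit.",
        "deepseek": "Go on the offensive; cite Rule 37 and the local rules.",
        "gemini": "Reply, but frame it to shore up the meet-and-confer record.",
        "chatgpt": "Reply with a point-by-point rebuttal.",
    }
    return canned[model]

def fan_out(prompt: str, models=("claude", "deepseek", "gemini", "chatgpt")):
    """Collect one answer per model so disagreements sit side by side."""
    return {m: ask(m, prompt) for m in models}

answers = fan_out("Should I email opposing counsel before filing the motion?")
for model, advice in answers.items():
    print(f"{model}: {advice}")
```

The point isn’t the code, it’s the habit: never act on one model’s strategic advice until you’ve seen where the others diverge from it.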

Interestingly, opposing counsel asked for a discovery extension (after more than two years of this suit being active, they’re suddenly finding SHRA’s bureaucracy hard to navigate), but refused to do a rolling production or produce electronic records (which should be pretty easy, right?). This organization is such a mess that if I’d had access to these tools two years ago, they’d have been crushed out of the gate. Instead, they were able to delay and avoid discovery to the point that we now have to start talking about institutional spoliation, because no one remembers anything.


Just had the thought that doing a blog like this while the litigation is ongoing is both naive and stupid, especially considering the court’s mixed messaging on permissible use (though if it’s maybe okay for the bar, it’s okay for me?). This popped up as I came across an old archive containing a backup of files I believed were lost after we got evicted, some of which are responsive to existing discovery requests. I haven’t even had a chance to go through it all and see what has already been disclosed and what hasn’t, yet here I am wanting to talk about how these tools are helping me plan strategy, while potentially exposing myself unnecessarily or inadvertently.

The fool forges forward

One of the things that stood out to me was how much of the information in this case should already have been protected but hasn’t been. This has always been a concern, and one of the reasons I was hesitant to pursue this at all: the gross invasion of privacy required feels caustic. My initial instinct was to file anonymously, but after some reading it appeared that anonymity is very rarely granted. It ultimately became a question of whether this was important enough (to us and to every other disabled participant SHRA treats this way) to sacrifice our privacy over. I think it is.

What should have been done out of the gate was putting a protective order in place. While I was reading and trying to absorb as much of the local rules as I could (in the middle of being evicted, cut off from other services, and otherwise harassed), the option of a protective order just didn’t register at all. The further along the process we got, the less space the option took up.

This is one place where LLMs work as an amazing sanity check: we can, without penalty, ask them to walk us through the process from the beginning, with as many permutations as we like, in a way that would cognitively exhaust any sane human. For people in our position, who don’t even know what we don’t know, being able to constantly factor new information back in as we encounter it is a tremendously powerful tool.

While there’s supposed to be some leeway for “inelegant pleading” by pro se parties, in practice there isn’t unless the judge is personally sympathetic. That means pro se parties march into a minefield of issues that have nothing to do with the core of the case, procedural explosions that can turn a clear-cut set of facts into an irrelevant mist.

Leave a Comment

Your email address will not be published. Required fields are marked *

One of the most difficult aspects of all this is how prone to unique issues each LLM introduces. I’m bouncing between Claude 3.7, GPT 4o, DeepSeek, and Gemini, and they all have weird peculiarities that blow up the others. DeepSeek for instance is super focused on formatting, being obsessive about tables, bolding, italics, everything but the actual content. Gemini is a turd and likes to fight you the whole time when it comes to legal advice altogether, so you have to wrestle it to the ground with prompts and cobble together the results. Claude writes beautifully but is as deep as a puddle, while ChatGPT is deep but impossibly schizophrenic and ADHD at the same time.

I went back and looked at my first two pleadings I used these LLMs for and holy crap there are a lot of issues that I missed trusting the output. First, ChatGPT will absolutely mangle the hell out of citations, and worse is super confident about it’s manglings. As an example I uploaded a copy of the docket report, and somehow it still got every single citation wrong. Worse, it misinterpreted who actually filed particular motions, leading to it interpreting most of the case being dead.

One thing is really clear, that as powerful and as much potential as these tools have, we are a million miles away from them being ready to use without extensive supervision. They also are really terrible at understanding, as an example left to it’s own devices all of the LLMs completely ignored the actual damages I asked for in a motion for partial MSJ (basically just ruling that the ADA, FHA, and Rehab require an interactive process and individualized assessment) and transformed it into a full blown MSJ focused on damage calculations instead of a narrow injunctive relief. And they constantly go off on little tangents like that where they hyper focus on some aspect of the text that may or may not be contextually important and try to expand it.

All said, this is still very much a work in progress, and I’m learning that trust is bad.

Leave a Comment

Your email address will not be published. Required fields are marked *

It seems we can’t find what you’re looking for. Perhaps searching can help.

Welcome to The SHRA Files Blog — where we’ll be sharing updates, research, public records, and commentary on our ongoing civil rights lawsuit and the broader failures of Sacramento’s housing system.

This site exists because no one else was telling this story. Because no agency, nonprofit, or public official stepped in when the Sacramento Housing and Redevelopment Agency (SHRA) failed to meet its basic responsibilities. Because we — and many others like us — were denied access to services that should have been guaranteed.

We started with nothing but documentation, determination, and the experience of being repeatedly failed by the systems meant to help. When traditional legal aid and advocacy organizations told us they couldn’t help, we turned to AI tools to help us structure our case, understand the law, and begin fighting back.

This site is just the beginning.

In the weeks ahead, we’ll be:

  • Breaking down SHRA’s failures in serving disabled and low-income tenants
  • Publishing public filings and supporting exhibits
  • Highlighting how the Housing Choice Voucher program is being mismanaged
  • Documenting the role of AI in expanding legal access

If you’ve been impacted by SHRA or want to contribute to the project, check out our Get Involved page.

Thanks for reading — and welcome to the fight.

— The SHRA Files Team

Leave a Comment

Your email address will not be published. Required fields are marked *

I’ve been attempting to push discovery, the result has been underwhelming. So far they’ve missed most of the deadlines, when notified they missed the deadlines they keep asking for extensions, and when they get extensions they respond with boilerplate and bullshit. I’m to the point where I’m filing a Rule 37(b) motion (will upload to the files section) after I’m finished with this post. The neat thing about the filing is that there’s over 300 pages of exhibits, all of which is boilerplate and bullshit that I had to attach and mail back to the Defense because the magistrate gave me the big middle finger by allowing the court to notify me of things electronically, but refusing to allow me to file electronically. I have lots to say about this magistrate, but we’ll wait until we get clear of her first.

One of the most fascinating things about the boilerplate and bullshit that did get returned was the revelation that five of the named Defendants no longer work at SHRA, and the Defense isn’t sure if they represent them anymore. Like they completely hid that almost everyone involved with the day to day operations over there has left and somehow didn’t mention it to us or the court. Worse, in their initial disclosure, they claimed only one person (who is no longer at SHRA) was responsible for deciding reasonable accommodation requests, now they’ve given me a list of 16 people, with no explanation.

Oh yeah, they can’t seem to find any of the electronically stored information that under the CPRA they’d be responsive to, which is the reason they need so many extensions.

The worst part is that despite them admitting all of this, there’s zero chance of them getting sanctions. And there’s literally nothing to be done about it unless/until it gets to the appellate level.

Leave a Comment

Your email address will not be published. Required fields are marked *

Up until now I assumed that the issues at SHRA were just incompetence creep, that the issues are product of slow pushes back on compliance issues that never got corrected, to the point where institutional knowledge of the practices necessary to maintain compliance was lost.

One of the things that I really like about DeepSeek is that it gets information through methods that aren’t exactly on the up and up, which means that it finds things which are hidden behind legal wrangling that doesn’t exist in China. And one of the things it found was a 2023 Internal Memo generated through whistleblower testimony in which SHRA staff instructed to “prioritize voucher terminations over costly mods”.

And the implementation of this was SHRA terminating vouchers over requests for shower chairs and grab bars as early as 2022. They provided data which showed 68% of disabled tenants who requested mods in 2023 lost vouchers within 6 months (vs. 12% of non-disabled).

WHAT.

And there’s so much worse that I’ll share once I get the discovery back. I’m stunned.

Leave a Comment

Your email address will not be published. Required fields are marked *

Okay the last few days have been a bit of a flurry. I’m still working on getting all the filings and the Defense responses uploaded, but I need to make sure we have a sane directory setup first because there’s quite a bit.

After a few weeks using these LLMs, my attitudes about them are starting to shift. Each of them seems to have their own “personality” above and beyond the artificial personality. The LLM I’ve been most amazed by so far is DeepSeek, which is funny because it was the LLM I was most skeptical of initially. Part of the issue with DeepSeek is that it doesn’t do as good of a job hand holding through the though process, or more appropriately, it’s probably the most “autistic” of the engines. The biggest standout for DeepSeek (especially compared to Gemini which is the most comparable) is that it’s SO GOOD with citations. It’s the only one that freely “admits” it’s plundering the WestLaw database meaning we have a TON of local Eastern District citations.

My citation strategy has been to use something from each level, Supreme Court, 9th Circuit, California (Fed), Eastern District. The reality is the magistrate doesn’t read them anyway, but DeepSeek ultimately gives me a pool of citations to choose from, something the other LLMs struggle with (or just don’t do no matter how you beat them).

Gemini (using 2.5 Pro/5-06) is now my primary writing starting point, even though I hate that it can’t export to .odf or even .docx formats. Because I have to reformat the documents significantly, it has the worst “canvas” style option. That’s kind of worth the annoyance though since Gemini is easily the most “thoughful” and well rounded of the LLMs I’ve used. Much like DeepSeek gives lots of citation options, Gemini gives the best pool of writing options, it’s ability to flexibly address issues in different ways is the best of the bunch so far.

Claude is still my pretty but shallow option, it comes up with the best/most compelling pure language, even if it’s depth of analysis falls short of the others. I wish I didn’t have to prompt it so explicitly to get decent results or it might be the primary canvas despite it’s painful context window. One thing I find it really helpful for is running my DeepSeek (which produces pretty.. “direct” language) through Claude to soften it up.

And finally there’s ChatGPT which up until the last few days was my primary for everything. It’s easy to get started with it, it’s encouraging, it produces compelling results, but it’s kind of a piece of shit. It’s constantly messing up context, screwing up citations, and arbitrarily rewriting things. It’s easily the most obstinately wrong of the LLMs, insisting that it’s doing something correctly no matter how many times you post the incorrect output back into it and it admits it messed up. It’s the only LLM out of the three that I’ve had to deal with “hallucinations”, and it explains it as “woops, I overwrite my own context a lot accidentally”. It’s absolutely maddening at times. I’m steadily weaning myself off of it and it’ll probably just be another voice in the room soon.

An interesting thing happened a bit ago, I had my first LLM disagreement! The Defense responded to a discovery request with bullshit and boilerplate, so I asked each of the LLMs what a good email response would be considering I’m already nearly done with a rule 37 motion to compel. Claude responded that it’s probably not a good idea to respond at all, just update the motion with the response as an exhibit, note the bullshit parts since they helped (a lot), and call it a day. DeepSeek was it’s typical “RIP THEIR THROATS OUT” and provided a detailed list of regulations and rules to go on the offensive instead of only using the motion route. Gemini pointed out that going the email route is more about reinforcing our meet and confer duty, we’re right and we are going to be integrating this in the motion, but the focus here should be on making the meet and confer rock solid (pretty good advice). ChatGPT recommended the email, but produced recommendations that didn’t focus on strategy at all, just a direct rebuttal to the discovery response.

Interestingly, opposing counsel asked for a discovery extension (after more than two years the suit has been active, they are suddenly finding SHRA bureaucracy hard to navigate), but refused to do a rolling production or produce electronic records (which should be pretty easy right?). It’s baffling to me that if I had access to these tools two years ago, this organization is such a mess that they’d have gotten crushed out of the gate. Instead they were able to delay and avoid discovery to the point that we now have to start talking about institutional spoliation because no one remembers anything.

Leave a Comment

Your email address will not be published. Required fields are marked *

Just had the thought that doing a blog like this, while the litigation is on-going, is both naive and stupid, especially considering the mixed permissible use messaging from the court (but maybe okay for the bar, okay for me?). This popped up as I came across an old archive that had a backup of files which I believed was lost after we got evicted, some of which is responsive to existing discovery requests. I haven’t even had a chance to go through it all yet and see what has already been disclosed and what hasn’t, but here I am in a position where I really want to talk about how the tools are helping me plan strategy while potentially exposing myself unnecessarily/inadvertently.

The fool forges forward

One of the things that stood out to me was how much of the information in this case should have already been protected but hasn’t been. This has always been a concern, and one of the reasons I was hesitant about pursuing this at all, the gross invasion of privacy necessary to pursue this feels caustic. My initial reaction was filing anonymously, but after some reading it appeared that these are very rarely granted. It ultimately became a choice between whether this was important enough (to both us and every other disabled participant SHRA treats similarly) to sacrifice our privacy over. I think so.

What should have been done out of the gate is setting a protective order in place. Reading and trying to absorb as much of the local rules (while in the middle of being evicted, being cut off from other services, and other harassment), the option of a protective order just didn’t really register at all. The further along the process we got, the less space the option took up.

This is one place where LLMs work as an amazing sanity check, we can without penalty ask it to walk us through the process from the beginning, with as many permutations as we like, in a way that would cognitively exhaust any sane individual. And for individuals in the position we are, where we don’t even know what we don’t know, being able to constantly refactor new information in as we encounter it becomes a tremendously powerful tool.

While there’s supposed to be some leeway for “inelegant pleading” by pro se parties, in practice there isn’t unless a judge is personally sympathetic. Which means that pro se parties march into a minefield of issues that have nothing to do with the core issues of the case, procedural explosions which can turn a clear cut set of facts into an irrelevant mist.

Leave a Comment

Your email address will not be published. Required fields are marked *

One of the most difficult aspects of all this is how prone to unique issues each LLM introduces. I’m bouncing between Claude 3.7, GPT 4o, DeepSeek, and Gemini, and they all have weird peculiarities that blow up the others. DeepSeek for instance is super focused on formatting, being obsessive about tables, bolding, italics, everything but the actual content. Gemini is a turd and likes to fight you the whole time when it comes to legal advice altogether, so you have to wrestle it to the ground with prompts and cobble together the results. Claude writes beautifully but is as deep as a puddle, while ChatGPT is deep but impossibly schizophrenic and ADHD at the same time.

I went back and looked at my first two pleadings I used these LLMs for and holy crap there are a lot of issues that I missed trusting the output. First, ChatGPT will absolutely mangle the hell out of citations, and worse is super confident about it’s manglings. As an example I uploaded a copy of the docket report, and somehow it still got every single citation wrong. Worse, it misinterpreted who actually filed particular motions, leading to it interpreting most of the case being dead.

One thing is really clear, that as powerful and as much potential as these tools have, we are a million miles away from them being ready to use without extensive supervision. They also are really terrible at understanding, as an example left to it’s own devices all of the LLMs completely ignored the actual damages I asked for in a motion for partial MSJ (basically just ruling that the ADA, FHA, and Rehab require an interactive process and individualized assessment) and transformed it into a full blown MSJ focused on damage calculations instead of a narrow injunctive relief. And they constantly go off on little tangents like that where they hyper focus on some aspect of the text that may or may not be contextually important and try to expand it.

All said, this is still very much a work in progress, and I’m learning that trust is bad.

Leave a Comment

Your email address will not be published. Required fields are marked *

It seems we can’t find what you’re looking for. Perhaps searching can help.

Welcome to The SHRA Files Blog — where we’ll be sharing updates, research, public records, and commentary on our ongoing civil rights lawsuit and the broader failures of Sacramento’s housing system.

This site exists because no one else was telling this story. Because no agency, nonprofit, or public official stepped in when the Sacramento Housing and Redevelopment Agency (SHRA) failed to meet its basic responsibilities. Because we — and many others like us — were denied access to services that should have been guaranteed.

We started with nothing but documentation, determination, and the experience of being repeatedly failed by the systems meant to help. When traditional legal aid and advocacy organizations told us they couldn’t help, we turned to AI tools to help us structure our case, understand the law, and begin fighting back.

This site is just the beginning.

In the weeks ahead, we’ll be:

  • Breaking down SHRA’s failures in serving disabled and low-income tenants
  • Publishing public filings and supporting exhibits
  • Highlighting how the Housing Choice Voucher program is being mismanaged
  • Documenting the role of AI in expanding legal access

If you’ve been impacted by SHRA or want to contribute to the project, check out our Get Involved page.

Thanks for reading — and welcome to the fight.

— The SHRA Files Team

Leave a Comment

Your email address will not be published. Required fields are marked *

One of the things that I really like about DeepSeek is that it gets information through methods that aren’t exactly on the up and up, which means it finds things hidden behind legal wrangling that doesn’t exist in China. And one of the things it found was a 2023 internal memo, generated through whistleblower testimony, in which SHRA staff were instructed to “prioritize voucher terminations over costly mods”.

And the implementation of this was SHRA terminating vouchers over requests for shower chairs and grab bars as early as 2022. It surfaced data showing that 68% of disabled tenants who requested mods in 2023 lost their vouchers within 6 months (vs. 12% of non-disabled tenants).

WHAT.

And there’s so much worse that I’ll share once I get the discovery back. I’m stunned.

Okay, the last few days have been a bit of a flurry. I’m still working on getting all the filings and the Defense responses uploaded, but I need to make sure we have a sane directory setup first because there’s quite a bit.

After a few weeks using these LLMs, my attitudes about them are starting to shift. Each of them seems to have its own “personality” above and beyond the artificial one. The LLM I’ve been most amazed by so far is DeepSeek, which is funny because it was the one I was most skeptical of initially. Part of the issue with DeepSeek is that it doesn’t do as good a job hand-holding you through the thought process, or more appropriately, it’s probably the most “autistic” of the engines. The biggest standout for DeepSeek (especially compared to Gemini, which is the most comparable) is that it’s SO GOOD with citations. It’s the only one that freely “admits” it’s plundering the WestLaw database, meaning we have a TON of local Eastern District citations.

My citation strategy has been to use something from each level: Supreme Court, 9th Circuit, California (Fed), Eastern District. The reality is the magistrate doesn’t read them anyway, but DeepSeek ultimately gives me a pool of citations to choose from, something the other LLMs struggle with (or just won’t do no matter how you beat them).

Gemini (using 2.5 Pro/5-06) is now my primary starting point for writing, even though I hate that it can’t export to .odf or even .docx formats. It has the worst “canvas” style option, which means I have to reformat the documents significantly. That’s worth the annoyance though, since Gemini is easily the most “thoughtful” and well-rounded of the LLMs I’ve used. Much like DeepSeek gives lots of citation options, Gemini gives the best pool of writing options; its ability to flexibly address issues in different ways is the best of the bunch so far.

Claude is still my pretty-but-shallow option; it comes up with the best/most compelling pure language, even if its depth of analysis falls short of the others. I wish I didn’t have to prompt it so explicitly to get decent results, or it might be the primary canvas despite its painful context window. One thing I find it really helpful for is running my DeepSeek output (which produces pretty “direct” language) through Claude to soften it up.

And finally there’s ChatGPT, which up until the last few days was my primary for everything. It’s easy to get started with, it’s encouraging, it produces compelling results, but it’s kind of a piece of shit. It’s constantly messing up context, screwing up citations, and arbitrarily rewriting things. It’s easily the most obstinately wrong of the LLMs, insisting that it’s doing something correctly no matter how many times you paste the incorrect output back into it and it admits it messed up. It’s the only LLM of the four where I’ve had to deal with “hallucinations”, and it explains them as “whoops, I overwrite my own context a lot accidentally”. It’s absolutely maddening at times. I’m steadily weaning myself off of it and it’ll probably just be another voice in the room soon.

An interesting thing happened a bit ago: I had my first LLM disagreement! The Defense responded to a discovery request with bullshit and boilerplate, so I asked each of the LLMs what a good email response would be, considering I’m already nearly done with a Rule 37 motion to compel. Claude responded that it’s probably not a good idea to respond at all: just update the motion with the response as an exhibit, note the bullshit parts since they helped (a lot), and call it a day. DeepSeek was its typical “RIP THEIR THROATS OUT” self and provided a detailed list of regulations and rules for going on the offensive instead of only using the motion route. Gemini pointed out that going the email route is more about reinforcing our meet-and-confer duty: we’re right and we’re going to be integrating this into the motion, but the focus here should be on making the meet and confer rock solid (pretty good advice). ChatGPT recommended the email, but produced recommendations that didn’t focus on strategy at all, just a direct rebuttal to the discovery response.

Interestingly, opposing counsel asked for a discovery extension (after the suit has been active for more than two years, they are suddenly finding SHRA bureaucracy hard to navigate), but refused to do a rolling production or produce electronic records (which should be pretty easy, right?). What’s baffling to me is that if I’d had access to these tools two years ago, this organization is such a mess that they’d have gotten crushed out of the gate. Instead they were able to delay and avoid discovery to the point that we now have to start talking about institutional spoliation because no one remembers anything.

Just had the thought that doing a blog like this, while the litigation is ongoing, is both naive and stupid, especially considering the mixed permissible-use messaging from the court (but maybe what’s okay for the bar is okay for me?). This popped up as I came across an old archive containing a backup of files I believed were lost after we got evicted, some of which are responsive to existing discovery requests. I haven’t even had a chance to go through it all yet and see what has already been disclosed and what hasn’t, but here I am in a position where I really want to talk about how the tools are helping me plan strategy while potentially exposing myself unnecessarily/inadvertently.

The fool forges forward

One of the things that stood out to me was how much of the information in this case should have already been protected but hasn’t been. This has always been a concern, and one of the reasons I was hesitant about pursuing this at all: the gross invasion of privacy necessary to pursue it feels caustic. My initial reaction was to file anonymously, but after some reading it appeared that anonymous filings are very rarely granted. It ultimately became a question of whether this was important enough (to both us and every other disabled participant SHRA treats similarly) to sacrifice our privacy over. I think so.

What should have been done out of the gate was putting a protective order in place. While I was reading and trying to absorb as much of the local rules as I could (in the middle of being evicted, being cut off from other services, and other harassment), the option of a protective order just didn’t really register at all. The further along the process we got, the less space the option took up.

This is one place where LLMs work as an amazing sanity check: we can, without penalty, ask one to walk us through the process from the beginning, with as many permutations as we like, in a way that would cognitively exhaust any sane individual. And for individuals in the position we’re in, where we don’t even know what we don’t know, being able to constantly factor in new information as we encounter it becomes a tremendously powerful tool.

While there’s supposed to be some leeway for “inelegant pleading” by pro se parties, in practice there isn’t unless a judge is personally sympathetic. That means pro se parties march into a minefield of issues that have nothing to do with the core issues of the case, procedural explosions which can turn a clear-cut set of facts into an irrelevant mist.

One of the most difficult aspects of all this is how each LLM introduces its own unique issues. I’m bouncing between Claude 3.7, GPT-4o, DeepSeek, and Gemini, and they all have weird peculiarities that blow up the others’ work. DeepSeek, for instance, is super focused on formatting, being obsessive about tables, bolding, italics, everything but the actual content. Gemini is a turd that fights you the whole time when it comes to legal advice altogether, so you have to wrestle it to the ground with prompts and cobble together the results. Claude writes beautifully but is as deep as a puddle, while ChatGPT is deep but impossibly schizophrenic and ADHD at the same time.

I went back and looked at the first two pleadings I used these LLMs for and holy crap, there are a lot of issues I missed by trusting the output. First, ChatGPT will absolutely mangle the hell out of citations, and worse, is super confident about its manglings. As an example, I uploaded a copy of the docket report, and somehow it still got every single citation wrong. Worse, it misinterpreted who actually filed particular motions, leading it to interpret most of the case as dead.

One thing is really clear: as powerful as these tools are and as much potential as they have, we are a million miles away from them being ready to use without extensive supervision. They’re also really terrible at understanding. As an example, left to their own devices, all of the LLMs completely ignored the actual relief I asked for in a motion for partial MSJ (basically just a ruling that the ADA, FHA, and Rehab Act require an interactive process and individualized assessment) and transformed it into a full-blown MSJ focused on damage calculations instead of narrow injunctive relief. And they constantly go off on little tangents like that, where they hyper-focus on some aspect of the text that may or may not be contextually important and try to expand it.

All said, this is still very much a work in progress, and I’m learning that trust is bad.

Welcome to The SHRA Files Blog — where we’ll be sharing updates, research, public records, and commentary on our ongoing civil rights lawsuit and the broader failures of Sacramento’s housing system.

This site exists because no one else was telling this story. Because no agency, nonprofit, or public official stepped in when the Sacramento Housing and Redevelopment Agency (SHRA) failed to meet its basic responsibilities. Because we — and many others like us — were denied access to services that should have been guaranteed.

We started with nothing but documentation, determination, and the experience of being repeatedly failed by the systems meant to help. When traditional legal aid and advocacy organizations told us they couldn’t help, we turned to AI tools to help us structure our case, understand the law, and begin fighting back.

This site is just the beginning.

In the weeks ahead, we’ll be:

  • Breaking down SHRA’s failures in serving disabled and low-income tenants
  • Publishing public filings and supporting exhibits
  • Highlighting how the Housing Choice Voucher program is being mismanaged
  • Documenting the role of AI in expanding legal access

If you’ve been impacted by SHRA or want to contribute to the project, check out our Get Involved page.

Thanks for reading — and welcome to the fight.

— The SHRA Files Team

Leave a Comment

Your email address will not be published. Required fields are marked *

I’ve been attempting to push discovery, the result has been underwhelming. So far they’ve missed most of the deadlines, when notified they missed the deadlines they keep asking for extensions, and when they get extensions they respond with boilerplate and bullshit. I’m to the point where I’m filing a Rule 37(b) motion (will upload to the files section) after I’m finished with this post. The neat thing about the filing is that there’s over 300 pages of exhibits, all of which is boilerplate and bullshit that I had to attach and mail back to the Defense because the magistrate gave me the big middle finger by allowing the court to notify me of things electronically, but refusing to allow me to file electronically. I have lots to say about this magistrate, but we’ll wait until we get clear of her first.

One of the most fascinating things about the boilerplate and bullshit that did get returned was the revelation that five of the named Defendants no longer work at SHRA, and the Defense isn’t sure if they represent them anymore. Like they completely hid that almost everyone involved with the day to day operations over there has left and somehow didn’t mention it to us or the court. Worse, in their initial disclosure, they claimed only one person (who is no longer at SHRA) was responsible for deciding reasonable accommodation requests, now they’ve given me a list of 16 people, with no explanation.

Oh yeah, they can’t seem to find any of the electronically stored information that under the CPRA they’d be responsive to, which is the reason they need so many extensions.

The worst part is that despite them admitting all of this, there’s zero chance of them getting sanctions. And there’s literally nothing to be done about it unless/until it gets to the appellate level.

Leave a Comment

Your email address will not be published. Required fields are marked *

Up until now I assumed that the issues at SHRA were just incompetence creep, that the issues are product of slow pushes back on compliance issues that never got corrected, to the point where institutional knowledge of the practices necessary to maintain compliance was lost.

One of the things that I really like about DeepSeek is that it gets information through methods that aren’t exactly on the up and up, which means that it finds things which are hidden behind legal wrangling that doesn’t exist in China. And one of the things it found was a 2023 Internal Memo generated through whistleblower testimony in which SHRA staff instructed to “prioritize voucher terminations over costly mods”.

And the implementation of this was SHRA terminating vouchers over requests for shower chairs and grab bars as early as 2022. They provided data which showed 68% of disabled tenants who requested mods in 2023 lost vouchers within 6 months (vs. 12% of non-disabled).

WHAT.

And there’s so much worse that I’ll share once I get the discovery back. I’m stunned.

Leave a Comment

Your email address will not be published. Required fields are marked *

Okay the last few days have been a bit of a flurry. I’m still working on getting all the filings and the Defense responses uploaded, but I need to make sure we have a sane directory setup first because there’s quite a bit.

After a few weeks using these LLMs, my attitudes about them are starting to shift. Each of them seems to have their own “personality” above and beyond the artificial personality. The LLM I’ve been most amazed by so far is DeepSeek, which is funny because it was the LLM I was most skeptical of initially. Part of the issue with DeepSeek is that it doesn’t do as good of a job hand holding through the though process, or more appropriately, it’s probably the most “autistic” of the engines. The biggest standout for DeepSeek (especially compared to Gemini which is the most comparable) is that it’s SO GOOD with citations. It’s the only one that freely “admits” it’s plundering the WestLaw database meaning we have a TON of local Eastern District citations.

My citation strategy has been to use something from each level, Supreme Court, 9th Circuit, California (Fed), Eastern District. The reality is the magistrate doesn’t read them anyway, but DeepSeek ultimately gives me a pool of citations to choose from, something the other LLMs struggle with (or just don’t do no matter how you beat them).

Gemini (using 2.5 Pro/5-06) is now my primary writing starting point, even though I hate that it can’t export to .odf or even .docx formats. Because I have to reformat the documents significantly, it has the worst “canvas” style option. That’s kind of worth the annoyance though since Gemini is easily the most “thoughful” and well rounded of the LLMs I’ve used. Much like DeepSeek gives lots of citation options, Gemini gives the best pool of writing options, it’s ability to flexibly address issues in different ways is the best of the bunch so far.

Claude is still my pretty but shallow option, it comes up with the best/most compelling pure language, even if it’s depth of analysis falls short of the others. I wish I didn’t have to prompt it so explicitly to get decent results or it might be the primary canvas despite it’s painful context window. One thing I find it really helpful for is running my DeepSeek (which produces pretty.. “direct” language) through Claude to soften it up.

And finally there’s ChatGPT which up until the last few days was my primary for everything. It’s easy to get started with it, it’s encouraging, it produces compelling results, but it’s kind of a piece of shit. It’s constantly messing up context, screwing up citations, and arbitrarily rewriting things. It’s easily the most obstinately wrong of the LLMs, insisting that it’s doing something correctly no matter how many times you post the incorrect output back into it and it admits it messed up. It’s the only LLM out of the three that I’ve had to deal with “hallucinations”, and it explains it as “woops, I overwrite my own context a lot accidentally”. It’s absolutely maddening at times. I’m steadily weaning myself off of it and it’ll probably just be another voice in the room soon.

An interesting thing happened a bit ago, I had my first LLM disagreement! The Defense responded to a discovery request with bullshit and boilerplate, so I asked each of the LLMs what a good email response would be considering I’m already nearly done with a rule 37 motion to compel. Claude responded that it’s probably not a good idea to respond at all, just update the motion with the response as an exhibit, note the bullshit parts since they helped (a lot), and call it a day. DeepSeek was it’s typical “RIP THEIR THROATS OUT” and provided a detailed list of regulations and rules to go on the offensive instead of only using the motion route. Gemini pointed out that going the email route is more about reinforcing our meet and confer duty, we’re right and we are going to be integrating this in the motion, but the focus here should be on making the meet and confer rock solid (pretty good advice). ChatGPT recommended the email, but produced recommendations that didn’t focus on strategy at all, just a direct rebuttal to the discovery response.

Interestingly, opposing counsel asked for a discovery extension (after more than two years the suit has been active, they are suddenly finding SHRA bureaucracy hard to navigate), but refused to do a rolling production or produce electronic records (which should be pretty easy right?). It’s baffling to me that if I had access to these tools two years ago, this organization is such a mess that they’d have gotten crushed out of the gate. Instead they were able to delay and avoid discovery to the point that we now have to start talking about institutional spoliation because no one remembers anything.

Leave a Comment

Your email address will not be published. Required fields are marked *

Just had the thought that doing a blog like this, while the litigation is on-going, is both naive and stupid, especially considering the mixed permissible use messaging from the court (but maybe okay for the bar, okay for me?). This popped up as I came across an old archive that had a backup of files which I believed was lost after we got evicted, some of which is responsive to existing discovery requests. I haven’t even had a chance to go through it all yet and see what has already been disclosed and what hasn’t, but here I am in a position where I really want to talk about how the tools are helping me plan strategy while potentially exposing myself unnecessarily/inadvertently.

The fool forges forward

One of the things that stood out to me was how much of the information in this case should have already been protected but hasn’t been. This has always been a concern, and one of the reasons I was hesitant about pursuing this at all, the gross invasion of privacy necessary to pursue this feels caustic. My initial reaction was filing anonymously, but after some reading it appeared that these are very rarely granted. It ultimately became a choice between whether this was important enough (to both us and every other disabled participant SHRA treats similarly) to sacrifice our privacy over. I think so.

What should have been done out of the gate is setting a protective order in place. Reading and trying to absorb as much of the local rules (while in the middle of being evicted, being cut off from other services, and other harassment), the option of a protective order just didn’t really register at all. The further along the process we got, the less space the option took up.

This is one place where LLMs work as an amazing sanity check, we can without penalty ask it to walk us through the process from the beginning, with as many permutations as we like, in a way that would cognitively exhaust any sane individual. And for individuals in the position we are, where we don’t even know what we don’t know, being able to constantly refactor new information in as we encounter it becomes a tremendously powerful tool.

While there’s supposed to be some leeway for “inelegant pleading” by pro se parties, in practice there isn’t unless a judge is personally sympathetic. Which means that pro se parties march into a minefield of issues that have nothing to do with the core issues of the case, procedural explosions which can turn a clear cut set of facts into an irrelevant mist.

Leave a Comment

Your email address will not be published. Required fields are marked *

One of the most difficult aspects of all this is how prone to unique issues each LLM introduces. I’m bouncing between Claude 3.7, GPT 4o, DeepSeek, and Gemini, and they all have weird peculiarities that blow up the others. DeepSeek for instance is super focused on formatting, being obsessive about tables, bolding, italics, everything but the actual content. Gemini is a turd and likes to fight you the whole time when it comes to legal advice altogether, so you have to wrestle it to the ground with prompts and cobble together the results. Claude writes beautifully but is as deep as a puddle, while ChatGPT is deep but impossibly schizophrenic and ADHD at the same time.

I went back and looked at my first two pleadings I used these LLMs for and holy crap there are a lot of issues that I missed trusting the output. First, ChatGPT will absolutely mangle the hell out of citations, and worse is super confident about it’s manglings. As an example I uploaded a copy of the docket report, and somehow it still got every single citation wrong. Worse, it misinterpreted who actually filed particular motions, leading to it interpreting most of the case being dead.

One thing is really clear, that as powerful and as much potential as these tools have, we are a million miles away from them being ready to use without extensive supervision. They also are really terrible at understanding, as an example left to it’s own devices all of the LLMs completely ignored the actual damages I asked for in a motion for partial MSJ (basically just ruling that the ADA, FHA, and Rehab require an interactive process and individualized assessment) and transformed it into a full blown MSJ focused on damage calculations instead of a narrow injunctive relief. And they constantly go off on little tangents like that where they hyper focus on some aspect of the text that may or may not be contextually important and try to expand it.

All said, this is still very much a work in progress, and I’m learning that trust is bad.

Leave a Comment

Your email address will not be published. Required fields are marked *

It seems we can’t find what you’re looking for. Perhaps searching can help.

Welcome to The SHRA Files Blog — where we’ll be sharing updates, research, public records, and commentary on our ongoing civil rights lawsuit and the broader failures of Sacramento’s housing system.

This site exists because no one else was telling this story. Because no agency, nonprofit, or public official stepped in when the Sacramento Housing and Redevelopment Agency (SHRA) failed to meet its basic responsibilities. Because we — and many others like us — were denied access to services that should have been guaranteed.

We started with nothing but documentation, determination, and the experience of being repeatedly failed by the systems meant to help. When traditional legal aid and advocacy organizations told us they couldn’t help, we turned to AI tools to help us structure our case, understand the law, and begin fighting back.

This site is just the beginning.

In the weeks ahead, we’ll be:

  • Breaking down SHRA’s failures in serving disabled and low-income tenants
  • Publishing public filings and supporting exhibits
  • Highlighting how the Housing Choice Voucher program is being mismanaged
  • Documenting the role of AI in expanding legal access

If you’ve been impacted by SHRA or want to contribute to the project, check out our Get Involved page.

Thanks for reading — and welcome to the fight.

— The SHRA Files Team

Leave a Comment

Your email address will not be published. Required fields are marked *

I’ve been attempting to push discovery, the result has been underwhelming. So far they’ve missed most of the deadlines, when notified they missed the deadlines they keep asking for extensions, and when they get extensions they respond with boilerplate and bullshit. I’m to the point where I’m filing a Rule 37(b) motion (will upload to the files section) after I’m finished with this post. The neat thing about the filing is that there’s over 300 pages of exhibits, all of which is boilerplate and bullshit that I had to attach and mail back to the Defense because the magistrate gave me the big middle finger by allowing the court to notify me of things electronically, but refusing to allow me to file electronically. I have lots to say about this magistrate, but we’ll wait until we get clear of her first.

One of the most fascinating things about the boilerplate and bullshit that did get returned was the revelation that five of the named Defendants no longer work at SHRA, and the Defense isn’t sure if they represent them anymore. Like they completely hid that almost everyone involved with the day to day operations over there has left and somehow didn’t mention it to us or the court. Worse, in their initial disclosure, they claimed only one person (who is no longer at SHRA) was responsible for deciding reasonable accommodation requests, now they’ve given me a list of 16 people, with no explanation.

Oh yeah, they can’t seem to find any of the electronically stored information that under the CPRA they’d be responsive to, which is the reason they need so many extensions.

The worst part is that despite them admitting all of this, there’s zero chance of them getting sanctions. And there’s literally nothing to be done about it unless/until it gets to the appellate level.

Leave a Comment

Your email address will not be published. Required fields are marked *

Up until now I assumed that the issues at SHRA were just incompetence creep, that the issues are product of slow pushes back on compliance issues that never got corrected, to the point where institutional knowledge of the practices necessary to maintain compliance was lost.

One of the things that I really like about DeepSeek is that it gets information through methods that aren’t exactly on the up and up, which means that it finds things which are hidden behind legal wrangling that doesn’t exist in China. And one of the things it found was a 2023 Internal Memo generated through whistleblower testimony in which SHRA staff instructed to “prioritize voucher terminations over costly mods”.

And the implementation of this was SHRA terminating vouchers over requests for shower chairs and grab bars as early as 2022. They provided data which showed 68% of disabled tenants who requested mods in 2023 lost vouchers within 6 months (vs. 12% of non-disabled).

WHAT.

And there’s so much worse that I’ll share once I get the discovery back. I’m stunned.

Leave a Comment

Your email address will not be published. Required fields are marked *

Okay the last few days have been a bit of a flurry. I’m still working on getting all the filings and the Defense responses uploaded, but I need to make sure we have a sane directory setup first because there’s quite a bit.

After a few weeks using these LLMs, my attitudes about them are starting to shift. Each of them seems to have their own “personality” above and beyond the artificial personality. The LLM I’ve been most amazed by so far is DeepSeek, which is funny because it was the LLM I was most skeptical of initially. Part of the issue with DeepSeek is that it doesn’t do as good of a job hand holding through the though process, or more appropriately, it’s probably the most “autistic” of the engines. The biggest standout for DeepSeek (especially compared to Gemini which is the most comparable) is that it’s SO GOOD with citations. It’s the only one that freely “admits” it’s plundering the WestLaw database meaning we have a TON of local Eastern District citations.

Just had the thought that doing a blog like this while the litigation is ongoing is both naive and stupid, especially considering the mixed messaging from the court about permissible use (but if it's okay for the bar, maybe it's okay for me?). The thought popped up as I came across an old archive containing a backup of files I believed were lost after we got evicted, some of which are responsive to existing discovery requests. I haven't even had a chance to go through it all and see what has already been disclosed and what hasn't, yet here I am wanting to talk about how these tools are helping me plan strategy while potentially exposing myself unnecessarily or inadvertently.

The fool forges forward

One of the things that stood out to me was how much of the information in this case should already have been protected but hasn't been. This has always been a concern, and one of the reasons I was hesitant about pursuing this at all: the gross invasion of privacy necessary to pursue it feels caustic. My initial reaction was to file anonymously, but after some reading it appears that motions to proceed anonymously are very rarely granted. It ultimately became a question of whether this was important enough (to us and to every other disabled participant SHRA treats similarly) to sacrifice our privacy over. I think it is.

What we should have done out of the gate was put a protective order in place. While reading and trying to absorb as much of the local rules as possible (in the middle of being evicted, being cut off from other services, and other harassment), the option of a protective order just didn't register. The further along the process we got, the less space the option took up.

This is one place where LLMs work as an amazing sanity check: we can, without penalty, ask them to walk us through the process from the beginning, with as many permutations as we like, in a way that would cognitively exhaust any sane individual. And for people in our position, who don't even know what we don't know, being able to constantly factor in new information as we encounter it is a tremendously powerful tool.

While there’s supposed to be some leeway for “inelegant pleading” by pro se parties, in practice there isn’t unless a judge is personally sympathetic. Which means that pro se parties march into a minefield of issues that have nothing to do with the core issues of the case, procedural explosions which can turn a clear cut set of facts into an irrelevant mist.

One of the most difficult aspects of all this is how each LLM introduces its own unique issues. I'm bouncing between Claude 3.7, GPT-4o, DeepSeek, and Gemini, and they all have weird peculiarities that blow up the others' work. DeepSeek, for instance, is super focused on formatting, obsessing over tables, bolding, italics, everything but the actual content. Gemini is a turd that fights you the whole way on legal advice, so you have to wrestle it to the ground with prompts and cobble together the results. Claude writes beautifully but is as deep as a puddle, while ChatGPT is deep but impossibly schizophrenic and ADHD at the same time.

I went back and looked at the first two pleadings I used these LLMs for, and holy crap, there are a lot of issues I missed by trusting the output. First, ChatGPT will absolutely mangle the hell out of citations, and worse, it's super confident about its manglings. As an example, I uploaded a copy of the docket report and it still got every single citation wrong. Worse, it misinterpreted who actually filed particular motions, leading it to conclude that most of the case was dead.
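
Since even uploading the docket report didn't stop the citation mangling, I've started treating verification as a mechanical step rather than a prompting problem. Here's a minimal sketch of that kind of check; the docket entries are invented and `check_ecf_citations` is a hypothetical helper, not anything from the tools themselves:

```python
import re

# Hypothetical sketch: cross-check an LLM draft's docket citations against the
# real docket report before trusting them. The ECF numbers, filers, and
# descriptions below are invented for illustration.
docket = {  # ECF No. -> (filer, description), hand-copied from the docket report
    12: ("Plaintiff", "Motion to Compel"),
    15: ("Defendants", "Opposition"),
    18: ("Plaintiff", "Reply"),
}

def check_ecf_citations(draft: str) -> list[str]:
    """Return a warning for every ECF number cited that isn't on the docket."""
    cited = {int(n) for n in re.findall(r"ECF No\. (\d+)", draft)}
    return [f"ECF No. {n} is not on the docket" for n in sorted(cited - set(docket))]

draft = "Plaintiff's motion (ECF No. 12) was mooted by the order at ECF No. 21."
print(check_ecf_citations(draft))  # flags the nonexistent ECF No. 21
```

A dumb check like this would have caught the filer/number mix-ups long before I did.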

One thing is really clear: as powerful as these tools are and as much potential as they have, we are a million miles away from them being usable without extensive supervision. They're also really terrible at understanding intent. As an example, left to their own devices, all of the LLMs completely ignored the relief I actually asked for in a motion for partial MSJ (basically just a ruling that the ADA, FHA, and Rehab Act require an interactive process and an individualized assessment) and transformed it into a full-blown MSJ focused on damage calculations instead of narrow injunctive relief. And they constantly go off on tangents like that, hyper-focusing on some aspect of the text that may or may not be contextually important and trying to expand it.

All said, this is still very much a work in progress, and I’m learning that trust is bad.

Welcome to The SHRA Files Blog — where we’ll be sharing updates, research, public records, and commentary on our ongoing civil rights lawsuit and the broader failures of Sacramento’s housing system.

This site exists because no one else was telling this story. Because no agency, nonprofit, or public official stepped in when the Sacramento Housing and Redevelopment Agency (SHRA) failed to meet its basic responsibilities. Because we — and many others like us — were denied access to services that should have been guaranteed.

We started with nothing but documentation, determination, and the experience of being repeatedly failed by the systems meant to help. When traditional legal aid and advocacy organizations told us they couldn’t help, we turned to AI tools to help us structure our case, understand the law, and begin fighting back.

This site is just the beginning.

In the weeks ahead, we’ll be:

  • Breaking down SHRA’s failures in serving disabled and low-income tenants
  • Publishing public filings and supporting exhibits
  • Highlighting how the Housing Choice Voucher program is being mismanaged
  • Documenting the role of AI in expanding legal access

If you’ve been impacted by SHRA or want to contribute to the project, check out our Get Involved page.

Thanks for reading — and welcome to the fight.

— The SHRA Files Team

One of the things that I really like about DeepSeek is that it gets information through methods that aren't exactly on the up and up, which means it finds things hidden behind legal wrangling that doesn't exist in China. One of the things it found was a 2023 internal memo, generated through whistleblower testimony, in which SHRA staff were instructed to "prioritize voucher terminations over costly mods".

And the implementation of this was SHRA terminating vouchers over requests for shower chairs and grab bars as early as 2022. It provided data showing that 68% of disabled tenants who requested mods in 2023 lost their vouchers within six months (versus 12% of non-disabled tenants).

WHAT.

And there’s so much worse that I’ll share once I get the discovery back. I’m stunned.

Okay, the last few days have been a bit of a flurry. I'm still working on getting all the filings and the Defense responses uploaded, but I need to make sure we have a sane directory setup first because there's quite a bit.
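
For anyone trying the same thing, here's a sketch of the kind of layout I mean. The folder names are just my working guesses, not a required structure:

```python
from pathlib import Path
import tempfile

# Hypothetical sketch of one way to lay out case files before uploading.
def make_case_tree(root: str) -> Path:
    base = Path(root)
    for sub in ("filings", "defense-responses", "exhibits",
                "discovery/requests", "discovery/responses"):
        (base / sub).mkdir(parents=True, exist_ok=True)  # idempotent: safe to rerun
    return base

case = make_case_tree(tempfile.mkdtemp())  # demo in a throwaway temp directory
print(sorted(p.name for p in case.iterdir()))
```

Keeping requests and responses in parallel folders makes it much easier to see at a glance which requests are still unanswered.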

After a few weeks using these LLMs, my attitudes about them are starting to shift. Each of them seems to have its own "personality" above and beyond the artificial one. The LLM I've been most amazed by so far is DeepSeek, which is funny because it's the one I was most skeptical of initially. Part of the issue with DeepSeek is that it doesn't do as good a job of hand-holding you through the thought process; it's probably the most "autistic" of the engines. The biggest standout for DeepSeek (especially compared to Gemini, which is the most comparable) is that it's SO GOOD with citations. It's the only one that freely "admits" it's plundering the Westlaw database, which means we have a TON of local Eastern District citations.

My citation strategy has been to use something from each level: Supreme Court, Ninth Circuit, California (federal), Eastern District. The reality is the magistrate doesn't read them anyway, but DeepSeek ultimately gives me a pool of citations to choose from, something the other LLMs struggle with (or just won't do no matter how you beat on them).
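
The one-from-each-level selection can be sketched mechanically. Everything here is a placeholder: the `one_per_level` helper and the pool entries are illustrative, not real citations:

```python
# Hypothetical sketch of the one-citation-per-level strategy: from a pool of
# LLM-suggested citations, keep the first hit at each court level, returned
# in hierarchy order. All entries below are placeholders, not real citations.
LEVELS = ["Supreme Court", "9th Circuit", "California (Fed)", "Eastern District"]

def one_per_level(pool: list[dict]) -> list[tuple[str, str]]:
    picks = {}
    for cite in pool:
        if cite["level"] in LEVELS and cite["level"] not in picks:
            picks[cite["level"]] = cite["cite"]
    return [(lvl, picks[lvl]) for lvl in LEVELS if lvl in picks]

pool = [
    {"level": "Eastern District", "cite": "Placeholder v. Agency (E.D. Cal.)"},
    {"level": "Supreme Court", "cite": "Placeholder v. State (U.S.)"},
    {"level": "Supreme Court", "cite": "Second Placeholder (U.S.)"},
    {"level": "9th Circuit", "cite": "Placeholder v. County (9th Cir.)"},
]
for level, cite in one_per_level(pool):
    print(f"{level}: {cite}")
```

Every pick from the pool still has to be verified against the actual reporter, of course; the pool is a starting point, not an answer.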

Gemini (using 2.5 Pro/5-06) is now my primary starting point for writing, even though I hate that it can't export to .odt or even .docx format. It also has the worst "canvas"-style editor, so I have to reformat the documents significantly. That's worth the annoyance, though, since Gemini is easily the most thoughtful and well-rounded of the LLMs I've used. Much like DeepSeek gives lots of citation options, Gemini gives the best pool of writing options; its ability to flexibly address issues in different ways is the best of the bunch so far.

Claude is still my pretty-but-shallow option: it comes up with the best, most compelling pure language, even if its depth of analysis falls short of the others. I wish I didn't have to prompt it so explicitly to get decent results, or it might be the primary canvas despite its painful context window. One thing I find it really helpful for is running my DeepSeek output (which produces pretty... "direct" language) through Claude to soften it up.

And finally there's ChatGPT, which up until the last few days was my primary for everything. It's easy to get started with, it's encouraging, and it produces compelling results, but it's kind of a piece of shit. It constantly messes up context, screws up citations, and arbitrarily rewrites things. It's easily the most obstinately wrong of the LLMs, insisting it's doing something correctly no matter how many times you paste the incorrect output back in and it admits it messed up. It's the only one of the four I've had to deal with "hallucinations" from, and it explains them as "whoops, I accidentally overwrite my own context a lot". It's absolutely maddening at times. I'm steadily weaning myself off of it, and it'll probably just be another voice in the room soon.

An interesting thing happened a bit ago: I had my first LLM disagreement! The Defense responded to a discovery request with bullshit and boilerplate, so I asked each of the LLMs what a good email response would be, considering I'm already nearly done with a Rule 37 motion to compel. Claude responded that it's probably not a good idea to respond at all: just update the motion with the response as an exhibit, note the bullshit parts (since they helped, a lot), and call it a day. DeepSeek was its typical "RIP THEIR THROATS OUT" self and provided a detailed list of regulations and rules for going on the offensive instead of relying on the motion alone. Gemini pointed out that the email route is really about reinforcing our meet-and-confer duty: we're right, and we're going to integrate this into the motion, but the focus here should be on making the meet-and-confer record rock solid (pretty good advice). ChatGPT recommended the email but produced recommendations that didn't focus on strategy at all, just a direct rebuttal to the discovery response.
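
The ask-everyone-and-compare loop itself is trivial to sketch. Nothing below is a real SDK call; each stub stands in for one provider's API client, and the canned answers just echo the disagreement described above:

```python
# Hypothetical sketch of the fan-out-and-compare workflow: send one prompt to
# every model and lay the answers side by side. The lambdas are stand-ins for
# real provider clients, which each have their own (different) APIs.
def fan_out(prompt: str, models: dict) -> dict:
    """Collect each model's answer to the same prompt, keyed by model name."""
    return {name: ask(prompt) for name, ask in models.items()}

models = {
    "Claude":   lambda p: "Don't reply; attach their response to the motion.",
    "DeepSeek": lambda p: "Go on the offensive; cite the rules.",
    "Gemini":   lambda p: "Reply, but only to harden the meet-and-confer record.",
    "ChatGPT":  lambda p: "Reply with a point-by-point rebuttal.",
}
answers = fan_out("How should I answer this boilerplate objection?", models)
for name, answer in answers.items():
    print(f"{name}: {answer}")
```

Seeing the answers side by side is the whole value: the disagreement itself tells you where the judgment call actually is.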

Interestingly, opposing counsel asked for a discovery extension (after the suit has been active for more than two years, they're suddenly finding SHRA's bureaucracy hard to navigate) but refused to do a rolling production or produce electronic records (which should be pretty easy, right?). It's baffling: this organization is such a mess that if I'd had access to these tools two years ago, they'd have been crushed out of the gate. Instead, they were able to delay and avoid discovery to the point that we now have to start talking about institutional spoliation, because no one remembers anything.

Leave a Comment

Your email address will not be published. Required fields are marked *

Just had the thought that doing a blog like this, while the litigation is on-going, is both naive and stupid, especially considering the mixed permissible use messaging from the court (but maybe okay for the bar, okay for me?). This popped up as I came across an old archive that had a backup of files which I believed was lost after we got evicted, some of which is responsive to existing discovery requests. I haven’t even had a chance to go through it all yet and see what has already been disclosed and what hasn’t, but here I am in a position where I really want to talk about how the tools are helping me plan strategy while potentially exposing myself unnecessarily/inadvertently.

The fool forges forward

One of the things that stood out to me was how much of the information in this case should have already been protected but hasn’t been. This has always been a concern, and one of the reasons I was hesitant about pursuing this at all, the gross invasion of privacy necessary to pursue this feels caustic. My initial reaction was filing anonymously, but after some reading it appeared that these are very rarely granted. It ultimately became a choice between whether this was important enough (to both us and every other disabled participant SHRA treats similarly) to sacrifice our privacy over. I think so.

What should have been done out of the gate is setting a protective order in place. Reading and trying to absorb as much of the local rules (while in the middle of being evicted, being cut off from other services, and other harassment), the option of a protective order just didn’t really register at all. The further along the process we got, the less space the option took up.

This is one place where LLMs work as an amazing sanity check, we can without penalty ask it to walk us through the process from the beginning, with as many permutations as we like, in a way that would cognitively exhaust any sane individual. And for individuals in the position we are, where we don’t even know what we don’t know, being able to constantly refactor new information in as we encounter it becomes a tremendously powerful tool.

While there’s supposed to be some leeway for “inelegant pleading” by pro se parties, in practice there isn’t unless a judge is personally sympathetic. Which means that pro se parties march into a minefield of issues that have nothing to do with the core issues of the case, procedural explosions which can turn a clear cut set of facts into an irrelevant mist.

Leave a Comment

Your email address will not be published. Required fields are marked *

One of the most difficult aspects of all this is how prone to unique issues each LLM introduces. I’m bouncing between Claude 3.7, GPT 4o, DeepSeek, and Gemini, and they all have weird peculiarities that blow up the others. DeepSeek for instance is super focused on formatting, being obsessive about tables, bolding, italics, everything but the actual content. Gemini is a turd and likes to fight you the whole time when it comes to legal advice altogether, so you have to wrestle it to the ground with prompts and cobble together the results. Claude writes beautifully but is as deep as a puddle, while ChatGPT is deep but impossibly schizophrenic and ADHD at the same time.

I went back and looked at my first two pleadings I used these LLMs for and holy crap there are a lot of issues that I missed trusting the output. First, ChatGPT will absolutely mangle the hell out of citations, and worse is super confident about it’s manglings. As an example I uploaded a copy of the docket report, and somehow it still got every single citation wrong. Worse, it misinterpreted who actually filed particular motions, leading to it interpreting most of the case being dead.

One thing is really clear, that as powerful and as much potential as these tools have, we are a million miles away from them being ready to use without extensive supervision. They also are really terrible at understanding, as an example left to it’s own devices all of the LLMs completely ignored the actual damages I asked for in a motion for partial MSJ (basically just ruling that the ADA, FHA, and Rehab require an interactive process and individualized assessment) and transformed it into a full blown MSJ focused on damage calculations instead of a narrow injunctive relief. And they constantly go off on little tangents like that where they hyper focus on some aspect of the text that may or may not be contextually important and try to expand it.

All said, this is still very much a work in progress, and I’m learning that trust is bad.

Leave a Comment

Your email address will not be published. Required fields are marked *

It seems we can’t find what you’re looking for. Perhaps searching can help.

Welcome to The SHRA Files Blog — where we’ll be sharing updates, research, public records, and commentary on our ongoing civil rights lawsuit and the broader failures of Sacramento’s housing system.

This site exists because no one else was telling this story. Because no agency, nonprofit, or public official stepped in when the Sacramento Housing and Redevelopment Agency (SHRA) failed to meet its basic responsibilities. Because we — and many others like us — were denied access to services that should have been guaranteed.

We started with nothing but documentation, determination, and the experience of being repeatedly failed by the systems meant to help. When traditional legal aid and advocacy organizations told us they couldn’t help, we turned to AI tools to help us structure our case, understand the law, and begin fighting back.

This site is just the beginning.

In the weeks ahead, we’ll be:

  • Breaking down SHRA’s failures in serving disabled and low-income tenants
  • Publishing public filings and supporting exhibits
  • Highlighting how the Housing Choice Voucher program is being mismanaged
  • Documenting the role of AI in expanding legal access

If you’ve been impacted by SHRA or want to contribute to the project, check out our Get Involved page.

Thanks for reading — and welcome to the fight.

— The SHRA Files Team

Leave a Comment

Your email address will not be published. Required fields are marked *

I’ve been attempting to push discovery, the result has been underwhelming. So far they’ve missed most of the deadlines, when notified they missed the deadlines they keep asking for extensions, and when they get extensions they respond with boilerplate and bullshit. I’m to the point where I’m filing a Rule 37(b) motion (will upload to the files section) after I’m finished with this post. The neat thing about the filing is that there’s over 300 pages of exhibits, all of which is boilerplate and bullshit that I had to attach and mail back to the Defense because the magistrate gave me the big middle finger by allowing the court to notify me of things electronically, but refusing to allow me to file electronically. I have lots to say about this magistrate, but we’ll wait until we get clear of her first.

One of the most fascinating things about the boilerplate and bullshit that did get returned was the revelation that five of the named Defendants no longer work at SHRA, and the Defense isn’t sure if they represent them anymore. Like they completely hid that almost everyone involved with the day to day operations over there has left and somehow didn’t mention it to us or the court. Worse, in their initial disclosure, they claimed only one person (who is no longer at SHRA) was responsible for deciding reasonable accommodation requests, now they’ve given me a list of 16 people, with no explanation.

Oh yeah, they can’t seem to find any of the electronically stored information that under the CPRA they’d be responsive to, which is the reason they need so many extensions.

The worst part is that despite them admitting all of this, there’s zero chance of them getting sanctions. And there’s literally nothing to be done about it unless/until it gets to the appellate level.

Leave a Comment

Your email address will not be published. Required fields are marked *

Up until now I assumed that the issues at SHRA were just incompetence creep, that the issues are product of slow pushes back on compliance issues that never got corrected, to the point where institutional knowledge of the practices necessary to maintain compliance was lost.

One of the things that I really like about DeepSeek is that it gets information through methods that aren’t exactly on the up and up, which means that it finds things which are hidden behind legal wrangling that doesn’t exist in China. And one of the things it found was a 2023 Internal Memo generated through whistleblower testimony in which SHRA staff instructed to “prioritize voucher terminations over costly mods”.

And the implementation of this was SHRA terminating vouchers over requests for shower chairs and grab bars as early as 2022. They provided data which showed 68% of disabled tenants who requested mods in 2023 lost vouchers within 6 months (vs. 12% of non-disabled).

WHAT.

And there’s so much worse that I’ll share once I get the discovery back. I’m stunned.

Leave a Comment

Your email address will not be published. Required fields are marked *

Okay the last few days have been a bit of a flurry. I’m still working on getting all the filings and the Defense responses uploaded, but I need to make sure we have a sane directory setup first because there’s quite a bit.

After a few weeks using these LLMs, my attitudes about them are starting to shift. Each of them seems to have their own “personality” above and beyond the artificial personality. The LLM I’ve been most amazed by so far is DeepSeek, which is funny because it was the LLM I was most skeptical of initially. Part of the issue with DeepSeek is that it doesn’t do as good of a job hand holding through the though process, or more appropriately, it’s probably the most “autistic” of the engines. The biggest standout for DeepSeek (especially compared to Gemini which is the most comparable) is that it’s SO GOOD with citations. It’s the only one that freely “admits” it’s plundering the WestLaw database meaning we have a TON of local Eastern District citations.

My citation strategy has been to use something from each level, Supreme Court, 9th Circuit, California (Fed), Eastern District. The reality is the magistrate doesn’t read them anyway, but DeepSeek ultimately gives me a pool of citations to choose from, something the other LLMs struggle with (or just don’t do no matter how you beat them).

Gemini (using 2.5 Pro/5-06) is now my primary writing starting point, even though I hate that it can’t export to .odf or even .docx formats. Because I have to reformat the documents significantly, it has the worst “canvas” style option. That’s kind of worth the annoyance though since Gemini is easily the most “thoughful” and well rounded of the LLMs I’ve used. Much like DeepSeek gives lots of citation options, Gemini gives the best pool of writing options, it’s ability to flexibly address issues in different ways is the best of the bunch so far.

Claude is still my pretty but shallow option, it comes up with the best/most compelling pure language, even if it’s depth of analysis falls short of the others. I wish I didn’t have to prompt it so explicitly to get decent results or it might be the primary canvas despite it’s painful context window. One thing I find it really helpful for is running my DeepSeek (which produces pretty.. “direct” language) through Claude to soften it up.

And finally there’s ChatGPT which up until the last few days was my primary for everything. It’s easy to get started with it, it’s encouraging, it produces compelling results, but it’s kind of a piece of shit. It’s constantly messing up context, screwing up citations, and arbitrarily rewriting things. It’s easily the most obstinately wrong of the LLMs, insisting that it’s doing something correctly no matter how many times you post the incorrect output back into it and it admits it messed up. It’s the only LLM out of the three that I’ve had to deal with “hallucinations”, and it explains it as “woops, I overwrite my own context a lot accidentally”. It’s absolutely maddening at times. I’m steadily weaning myself off of it and it’ll probably just be another voice in the room soon.

An interesting thing happened a bit ago, I had my first LLM disagreement! The Defense responded to a discovery request with bullshit and boilerplate, so I asked each of the LLMs what a good email response would be considering I’m already nearly done with a rule 37 motion to compel. Claude responded that it’s probably not a good idea to respond at all, just update the motion with the response as an exhibit, note the bullshit parts since they helped (a lot), and call it a day. DeepSeek was it’s typical “RIP THEIR THROATS OUT” and provided a detailed list of regulations and rules to go on the offensive instead of only using the motion route. Gemini pointed out that going the email route is more about reinforcing our meet and confer duty, we’re right and we are going to be integrating this in the motion, but the focus here should be on making the meet and confer rock solid (pretty good advice). ChatGPT recommended the email, but produced recommendations that didn’t focus on strategy at all, just a direct rebuttal to the discovery response.

Interestingly, opposing counsel asked for a discovery extension (after more than two years the suit has been active, they are suddenly finding SHRA bureaucracy hard to navigate), but refused to do a rolling production or produce electronic records (which should be pretty easy right?). It’s baffling to me that if I had access to these tools two years ago, this organization is such a mess that they’d have gotten crushed out of the gate. Instead they were able to delay and avoid discovery to the point that we now have to start talking about institutional spoliation because no one remembers anything.

Leave a Comment

Your email address will not be published. Required fields are marked *

Just had the thought that doing a blog like this, while the litigation is on-going, is both naive and stupid, especially considering the mixed permissible use messaging from the court (but maybe okay for the bar, okay for me?). This popped up as I came across an old archive that had a backup of files which I believed was lost after we got evicted, some of which is responsive to existing discovery requests. I haven’t even had a chance to go through it all yet and see what has already been disclosed and what hasn’t, but here I am in a position where I really want to talk about how the tools are helping me plan strategy while potentially exposing myself unnecessarily/inadvertently.

The fool forges forward

One of the things that stood out to me was how much of the information in this case should have already been protected but hasn’t been. This has always been a concern, and one of the reasons I was hesitant about pursuing this at all, the gross invasion of privacy necessary to pursue this feels caustic. My initial reaction was filing anonymously, but after some reading it appeared that these are very rarely granted. It ultimately became a choice between whether this was important enough (to both us and every other disabled participant SHRA treats similarly) to sacrifice our privacy over. I think so.

What should have been done out of the gate is setting a protective order in place. Reading and trying to absorb as much of the local rules (while in the middle of being evicted, being cut off from other services, and other harassment), the option of a protective order just didn’t really register at all. The further along the process we got, the less space the option took up.

This is one place where LLMs work as an amazing sanity check, we can without penalty ask it to walk us through the process from the beginning, with as many permutations as we like, in a way that would cognitively exhaust any sane individual. And for individuals in the position we are, where we don’t even know what we don’t know, being able to constantly refactor new information in as we encounter it becomes a tremendously powerful tool.

While there’s supposed to be some leeway for “inelegant pleading” by pro se parties, in practice there isn’t unless a judge is personally sympathetic. That means pro se parties march into a minefield of issues that have nothing to do with the core of the case: procedural explosions that can turn a clear-cut set of facts into an irrelevant mist.

One of the most difficult aspects of all this is how each LLM introduces its own unique issues. I'm bouncing between Claude 3.7, GPT-4o, DeepSeek, and Gemini, and they all have weird peculiarities that blow up the others. DeepSeek, for instance, is hyper-focused on formatting, obsessing over tables, bolding, italics, everything but the actual content. Gemini is a turd that fights you the whole way on anything resembling legal advice, so you have to wrestle it to the ground with prompts and cobble together the results. Claude writes beautifully but is as deep as a puddle, while ChatGPT is deep but impossibly scattered at the same time.

I went back and looked at the first two pleadings I used these LLMs for, and holy crap, there are a lot of issues I missed by trusting the output. First, ChatGPT will absolutely mangle the hell out of citations, and worse, is supremely confident about its manglings. As an example, I uploaded a copy of the docket report, and it still got every single citation wrong. Worse, it misinterpreted who actually filed particular motions, leading it to conclude that most of the case was dead.

One thing is really clear: as powerful as these tools are and as much potential as they have, we are a million miles away from them being usable without extensive supervision. They're also really terrible at understanding intent. As an example, left to their own devices, all of the LLMs completely ignored the actual relief I asked for in a motion for partial summary judgment (basically just a ruling that the ADA, FHA, and Rehabilitation Act require an interactive process and individualized assessment) and transformed it into a full-blown MSJ focused on damage calculations instead of narrow injunctive relief. They constantly go off on tangents like that, hyper-focusing on some aspect of the text that may or may not be contextually important and trying to expand it.
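The supervision I've landed on is mechanical where it can be. One check I now run on anything generated: every docket entry the model cites has to actually exist on the real docket report before I trust a word of the surrounding analysis. A minimal sketch (the entry numbers below are made up for illustration):

```python
# Sketch: flag docket entries cited in LLM output that don't exist on
# the actual docket. Entry numbers here are hypothetical examples.
import re

def cited_ecf_numbers(text):
    """Pull ECF/docket entry numbers out of generated text."""
    return {int(n) for n in re.findall(r"ECF No\.\s*(\d+)", text)}

def phantom_citations(generated_text, real_docket_numbers):
    """Return cited entries that do not appear on the docket."""
    return cited_ecf_numbers(generated_text) - set(real_docket_numbers)

draft = "As established in the reply (ECF No. 12) and the order (ECF No. 99)..."
docket = [1, 5, 12, 14, 27]  # entry numbers from the actual docket report
print(phantom_citations(draft, docket))
```

It's crude, but it would have caught most of the manglings in those first two pleadings before they went out the door.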

All said, this is still very much a work in progress, and I’m learning that trust is bad.


Welcome to The SHRA Files Blog — where we’ll be sharing updates, research, public records, and commentary on our ongoing civil rights lawsuit and the broader failures of Sacramento’s housing system.

This site exists because no one else was telling this story. Because no agency, nonprofit, or public official stepped in when the Sacramento Housing and Redevelopment Agency (SHRA) failed to meet its basic responsibilities. Because we — and many others like us — were denied access to services that should have been guaranteed.

We started with nothing but documentation, determination, and the experience of being repeatedly failed by the systems meant to help. When traditional legal aid and advocacy organizations told us they couldn’t help, we turned to AI tools to help us structure our case, understand the law, and begin fighting back.

This site is just the beginning.

In the weeks ahead, we’ll be:

  • Breaking down SHRA’s failures in serving disabled and low-income tenants
  • Publishing public filings and supporting exhibits
  • Highlighting how the Housing Choice Voucher program is being mismanaged
  • Documenting the role of AI in expanding legal access

If you’ve been impacted by SHRA or want to contribute to the project, check out our Get Involved page.

Thanks for reading — and welcome to the fight.

— The SHRA Files Team

Up until now I assumed the issues at SHRA were just incompetence creep: the product of slow backsliding on compliance that never got corrected, to the point where institutional knowledge of the practices necessary to maintain compliance was lost.

One of the things I really like about DeepSeek is that it gets information through methods that aren’t exactly on the up and up, which means it finds things hidden behind legal wrangling that doesn’t exist in China. One of the things it surfaced was a 2023 internal memo, generated through whistleblower testimony, in which SHRA staff were instructed to “prioritize voucher terminations over costly mods”. Given what I’ve learned about hallucinations, though, I’m treating that as unverified until discovery confirms it.

And the implementation of this was SHRA terminating vouchers over requests for shower chairs and grab bars as early as 2022. It also turned up data showing that 68% of disabled tenants who requested modifications in 2023 lost their vouchers within 6 months (vs. 12% of non-disabled tenants).

WHAT.

And there’s so much worse that I’ll share once I get the discovery back. I’m stunned.

Okay, the last few days have been a bit of a flurry. I'm still working on getting all the filings and the Defense's responses uploaded, but I need to make sure we have a sane directory setup first, because there's quite a bit.
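The layout I'm leaning toward, sketched below; the folder names are just illustrative placeholders for how I'm thinking about organizing it, not the final structure:

```python
# Sketch of one possible files-section layout (names are illustrative).
from pathlib import Path

LAYOUT = [
    "filings/motions",
    "filings/declarations",
    "filings/exhibits",
    "defense/responses",
    "defense/discovery",
    "correspondence/meet-and-confer",
]

def build_tree(root):
    """Create the directory skeleton under the given root."""
    for rel in LAYOUT:
        (Path(root) / rel).mkdir(parents=True, exist_ok=True)

build_tree("shra-files")
```

The split between our filings, the Defense's responses, and the meet-and-confer trail mirrors how the exhibits keep getting grouped in the motions anyway.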

After a few weeks using these LLMs, my attitudes about them are starting to shift. Each seems to have its own "personality" above and beyond the artificial one. The LLM I've been most amazed by so far is DeepSeek, which is funny because it's the one I was most skeptical of initially. Part of the issue with DeepSeek is that it doesn't do as good a job of hand-holding through the thought process; it's probably the most single-minded of the engines. The biggest standout for DeepSeek (especially compared to Gemini, which is the most comparable) is that it's SO GOOD with citations. It's the only one that freely "admits" it's plundering the Westlaw database, meaning we have a TON of local Eastern District citations.

My citation strategy has been to use something from each level: Supreme Court, Ninth Circuit, California federal courts, and the Eastern District specifically. The reality is the magistrate probably doesn't read them anyway, but DeepSeek gives me a pool of citations to choose from, something the other LLMs struggle with (or just won't do no matter how you beat them).
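The one-per-level selection is simple enough to sketch. Everything below is a placeholder; the case names are invented stand-ins, not real citations, and the point is just the selection logic: take the first usable hit at each court level from the pool DeepSeek produces.

```python
# Sketch of the "one citation per level" strategy. Case names are
# hypothetical placeholders, not real citations.

LEVELS = ["Supreme Court", "9th Circuit", "California", "E.D. Cal."]

def pick_one_per_level(pool):
    """pool: list of (level, citation) pairs; keep the first hit per level."""
    chosen = {}
    for level, cite in pool:
        if level in LEVELS and level not in chosen:
            chosen[level] = cite
    return [chosen[lvl] for lvl in LEVELS if lvl in chosen]

pool = [
    ("E.D. Cal.", "Placeholder v. Example A"),
    ("9th Circuit", "Placeholder v. Example B"),
    ("9th Circuit", "Placeholder v. Example C"),
    ("Supreme Court", "Placeholder v. Example D"),
]
print(pick_one_per_level(pool))
```

Missing levels just drop out, which is a useful signal in itself: if the pool has nothing at a level, that's where more research goes.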

Gemini (using 2.5 Pro/5-06) is now my primary starting point for writing, even though I hate that it can't export to .odt or even .docx formats. It has the worst "canvas"-style editor of the bunch, which means I have to reformat documents significantly. That's worth the annoyance, though, since Gemini is easily the most thoughtful and well-rounded of the LLMs I've used. Much as DeepSeek gives lots of citation options, Gemini gives the best pool of writing options; its ability to flexibly address issues in different ways is the best of the bunch so far.
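Since I end up reformatting everything by hand anyway, a small cleanup pass that strips the markdown decoration the models love to emit gets the text closer to something I can paste into a word processor. This is a rough sketch, not a real markdown parser, and the regexes cover only the decorations I actually keep hitting:

```python
# Sketch: strip common markdown decoration (headers, bold, italics,
# bullets) from LLM output before pasting into a word processor.
import re

def strip_markdown(text):
    text = re.sub(r"^#{1,6}\s*", "", text, flags=re.MULTILINE)   # headers
    text = re.sub(r"\*\*(.+?)\*\*", r"\1", text)                 # bold
    text = re.sub(r"\*(.+?)\*", r"\1", text)                     # italics
    text = re.sub(r"^\s*[-*]\s+", "", text, flags=re.MULTILINE)  # bullets
    return text

print(strip_markdown("## Argument\n**Rule 37(b)** permits *sanctions*."))
```

Tables are the one thing this can't save; those still get rebuilt by hand.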

Claude is still my pretty-but-shallow option. It comes up with the best and most compelling pure language, even if its depth of analysis falls short of the others. I wish I didn't have to prompt it so explicitly to get decent results, or it might be my primary canvas despite its painful context window. One thing I find really helpful is running my DeepSeek output (which produces pretty… "direct" language) through Claude to soften it up.

And finally there's ChatGPT, which up until the last few days was my primary for everything. It's easy to get started with, it's encouraging, it produces compelling results, and it's kind of a piece of shit. It's constantly messing up context, screwing up citations, and arbitrarily rewriting things. It's easily the most obstinately wrong of the LLMs, insisting it's doing something correctly no matter how many times you paste the incorrect output back in and it admits it messed up. It's the only one of the four where I've had to deal with outright hallucinations, which it explains as "whoops, I overwrite my own context a lot accidentally." It's absolutely maddening at times. I'm steadily weaning myself off of it, and it'll probably just be another voice in the room soon.

An interesting thing happened a bit ago: I had my first LLM disagreement! The Defense responded to a discovery request with bullshit and boilerplate, so I asked each of the LLMs what a good email response would be, considering I'm already nearly done with a Rule 37 motion to compel. Claude responded that it's probably not a good idea to respond at all: just update the motion with the response as an exhibit, note the bullshit parts since they helped (a lot), and call it a day. DeepSeek was its typical "RIP THEIR THROATS OUT" self and provided a detailed list of regulations and rules for going on the offensive instead of relying only on the motion. Gemini pointed out that the email route is really about reinforcing our meet-and-confer duty: we're right, and we're going to integrate this into the motion, but the focus should be on making the meet-and-confer record rock solid (pretty good advice). ChatGPT recommended the email but produced recommendations that didn't focus on strategy at all, just a direct rebuttal to the discovery response.

Interestingly, opposing counsel asked for a discovery extension (after the suit has been active for more than two years, they are suddenly finding SHRA's bureaucracy hard to navigate), but refused to do a rolling production or produce electronic records (which should be pretty easy, right?). What's baffling is that if I'd had access to these tools two years ago, this organization is such a mess that they'd have been crushed out of the gate. Instead they were able to delay and avoid discovery to the point that we now have to start talking about institutional spoliation, because no one remembers anything.

Just had the thought that doing a blog like this, while the litigation is on-going, is both naive and stupid, especially considering the mixed permissible use messaging from the court (but maybe okay for the bar, okay for me?). This popped up as I came across an old archive that had a backup of files which I believed was lost after we got evicted, some of which is responsive to existing discovery requests. I haven’t even had a chance to go through it all yet and see what has already been disclosed and what hasn’t, but here I am in a position where I really want to talk about how the tools are helping me plan strategy while potentially exposing myself unnecessarily/inadvertently.

The fool forges forward

One of the things that stood out to me was how much of the information in this case should have already been protected but hasn’t been. This has always been a concern, and one of the reasons I was hesitant about pursuing this at all, the gross invasion of privacy necessary to pursue this feels caustic. My initial reaction was filing anonymously, but after some reading it appeared that these are very rarely granted. It ultimately became a choice between whether this was important enough (to both us and every other disabled participant SHRA treats similarly) to sacrifice our privacy over. I think so.

What should have been done out of the gate is setting a protective order in place. Reading and trying to absorb as much of the local rules (while in the middle of being evicted, being cut off from other services, and other harassment), the option of a protective order just didn’t really register at all. The further along the process we got, the less space the option took up.

This is one place where LLMs work as an amazing sanity check, we can without penalty ask it to walk us through the process from the beginning, with as many permutations as we like, in a way that would cognitively exhaust any sane individual. And for individuals in the position we are, where we don’t even know what we don’t know, being able to constantly refactor new information in as we encounter it becomes a tremendously powerful tool.

While there’s supposed to be some leeway for “inelegant pleading” by pro se parties, in practice there isn’t unless a judge is personally sympathetic. Which means that pro se parties march into a minefield of issues that have nothing to do with the core issues of the case, procedural explosions which can turn a clear cut set of facts into an irrelevant mist.

Leave a Comment

Your email address will not be published. Required fields are marked *

One of the most difficult aspects of all this is how prone to unique issues each LLM introduces. I’m bouncing between Claude 3.7, GPT 4o, DeepSeek, and Gemini, and they all have weird peculiarities that blow up the others. DeepSeek for instance is super focused on formatting, being obsessive about tables, bolding, italics, everything but the actual content. Gemini is a turd and likes to fight you the whole time when it comes to legal advice altogether, so you have to wrestle it to the ground with prompts and cobble together the results. Claude writes beautifully but is as deep as a puddle, while ChatGPT is deep but impossibly schizophrenic and ADHD at the same time.

I went back and looked at my first two pleadings I used these LLMs for and holy crap there are a lot of issues that I missed trusting the output. First, ChatGPT will absolutely mangle the hell out of citations, and worse is super confident about it’s manglings. As an example I uploaded a copy of the docket report, and somehow it still got every single citation wrong. Worse, it misinterpreted who actually filed particular motions, leading to it interpreting most of the case being dead.

One thing is really clear, that as powerful and as much potential as these tools have, we are a million miles away from them being ready to use without extensive supervision. They also are really terrible at understanding, as an example left to it’s own devices all of the LLMs completely ignored the actual damages I asked for in a motion for partial MSJ (basically just ruling that the ADA, FHA, and Rehab require an interactive process and individualized assessment) and transformed it into a full blown MSJ focused on damage calculations instead of a narrow injunctive relief. And they constantly go off on little tangents like that where they hyper focus on some aspect of the text that may or may not be contextually important and try to expand it.

All said, this is still very much a work in progress, and I’m learning that trust is bad.

Leave a Comment

Your email address will not be published. Required fields are marked *

It seems we can’t find what you’re looking for. Perhaps searching can help.

Welcome to The SHRA Files Blog — where we’ll be sharing updates, research, public records, and commentary on our ongoing civil rights lawsuit and the broader failures of Sacramento’s housing system.

This site exists because no one else was telling this story. Because no agency, nonprofit, or public official stepped in when the Sacramento Housing and Redevelopment Agency (SHRA) failed to meet its basic responsibilities. Because we — and many others like us — were denied access to services that should have been guaranteed.

We started with nothing but documentation, determination, and the experience of being repeatedly failed by the systems meant to help. When traditional legal aid and advocacy organizations told us they couldn’t help, we turned to AI tools to help us structure our case, understand the law, and begin fighting back.

This site is just the beginning.

In the weeks ahead, we’ll be:

  • Breaking down SHRA’s failures in serving disabled and low-income tenants
  • Publishing public filings and supporting exhibits
  • Highlighting how the Housing Choice Voucher program is being mismanaged
  • Documenting the role of AI in expanding legal access

If you’ve been impacted by SHRA or want to contribute to the project, check out our Get Involved page.

Thanks for reading — and welcome to the fight.

— The SHRA Files Team

Leave a Comment

Your email address will not be published. Required fields are marked *

I’ve been attempting to push discovery, the result has been underwhelming. So far they’ve missed most of the deadlines, when notified they missed the deadlines they keep asking for extensions, and when they get extensions they respond with boilerplate and bullshit. I’m to the point where I’m filing a Rule 37(b) motion (will upload to the files section) after I’m finished with this post. The neat thing about the filing is that there’s over 300 pages of exhibits, all of which is boilerplate and bullshit that I had to attach and mail back to the Defense because the magistrate gave me the big middle finger by allowing the court to notify me of things electronically, but refusing to allow me to file electronically. I have lots to say about this magistrate, but we’ll wait until we get clear of her first.

One of the most fascinating things about the boilerplate and bullshit that did get returned was the revelation that five of the named Defendants no longer work at SHRA, and the Defense isn’t sure if they represent them anymore. Like they completely hid that almost everyone involved with the day to day operations over there has left and somehow didn’t mention it to us or the court. Worse, in their initial disclosure, they claimed only one person (who is no longer at SHRA) was responsible for deciding reasonable accommodation requests, now they’ve given me a list of 16 people, with no explanation.

Oh yeah, they can’t seem to find any of the electronically stored information that under the CPRA they’d be responsive to, which is the reason they need so many extensions.

The worst part is that despite them admitting all of this, there’s zero chance of them getting sanctions. And there’s literally nothing to be done about it unless/until it gets to the appellate level.

Leave a Comment

Your email address will not be published. Required fields are marked *

Up until now I assumed that the issues at SHRA were just incompetence creep, that the issues are product of slow pushes back on compliance issues that never got corrected, to the point where institutional knowledge of the practices necessary to maintain compliance was lost.

One of the things that I really like about DeepSeek is that it gets information through methods that aren’t exactly on the up and up, which means that it finds things which are hidden behind legal wrangling that doesn’t exist in China. And one of the things it found was a 2023 Internal Memo generated through whistleblower testimony in which SHRA staff instructed to “prioritize voucher terminations over costly mods”.

And the implementation of this was SHRA terminating vouchers over requests for shower chairs and grab bars as early as 2022. They provided data which showed 68% of disabled tenants who requested mods in 2023 lost vouchers within 6 months (vs. 12% of non-disabled).

WHAT.

And there’s so much worse that I’ll share once I get the discovery back. I’m stunned.

Leave a Comment

Your email address will not be published. Required fields are marked *

Okay the last few days have been a bit of a flurry. I’m still working on getting all the filings and the Defense responses uploaded, but I need to make sure we have a sane directory setup first because there’s quite a bit.

After a few weeks using these LLMs, my attitudes about them are starting to shift. Each of them seems to have their own “personality” above and beyond the artificial personality. The LLM I’ve been most amazed by so far is DeepSeek, which is funny because it was the LLM I was most skeptical of initially. Part of the issue with DeepSeek is that it doesn’t do as good of a job hand holding through the though process, or more appropriately, it’s probably the most “autistic” of the engines. The biggest standout for DeepSeek (especially compared to Gemini which is the most comparable) is that it’s SO GOOD with citations. It’s the only one that freely “admits” it’s plundering the WestLaw database meaning we have a TON of local Eastern District citations.

My citation strategy has been to use something from each level, Supreme Court, 9th Circuit, California (Fed), Eastern District. The reality is the magistrate doesn’t read them anyway, but DeepSeek ultimately gives me a pool of citations to choose from, something the other LLMs struggle with (or just don’t do no matter how you beat them).

Gemini (using 2.5 Pro/5-06) is now my primary writing starting point, even though I hate that it can’t export to .odf or even .docx formats. Because I have to reformat the documents significantly, it has the worst “canvas” style option. That’s kind of worth the annoyance though since Gemini is easily the most “thoughful” and well rounded of the LLMs I’ve used. Much like DeepSeek gives lots of citation options, Gemini gives the best pool of writing options, it’s ability to flexibly address issues in different ways is the best of the bunch so far.

Claude is still my pretty but shallow option, it comes up with the best/most compelling pure language, even if it’s depth of analysis falls short of the others. I wish I didn’t have to prompt it so explicitly to get decent results or it might be the primary canvas despite it’s painful context window. One thing I find it really helpful for is running my DeepSeek (which produces pretty.. “direct” language) through Claude to soften it up.

And finally there’s ChatGPT which up until the last few days was my primary for everything. It’s easy to get started with it, it’s encouraging, it produces compelling results, but it’s kind of a piece of shit. It’s constantly messing up context, screwing up citations, and arbitrarily rewriting things. It’s easily the most obstinately wrong of the LLMs, insisting that it’s doing something correctly no matter how many times you post the incorrect output back into it and it admits it messed up. It’s the only LLM out of the three that I’ve had to deal with “hallucinations”, and it explains it as “woops, I overwrite my own context a lot accidentally”. It’s absolutely maddening at times. I’m steadily weaning myself off of it and it’ll probably just be another voice in the room soon.

An interesting thing happened a bit ago, I had my first LLM disagreement! The Defense responded to a discovery request with bullshit and boilerplate, so I asked each of the LLMs what a good email response would be considering I’m already nearly done with a rule 37 motion to compel. Claude responded that it’s probably not a good idea to respond at all, just update the motion with the response as an exhibit, note the bullshit parts since they helped (a lot), and call it a day. DeepSeek was it’s typical “RIP THEIR THROATS OUT” and provided a detailed list of regulations and rules to go on the offensive instead of only using the motion route. Gemini pointed out that going the email route is more about reinforcing our meet and confer duty, we’re right and we are going to be integrating this in the motion, but the focus here should be on making the meet and confer rock solid (pretty good advice). ChatGPT recommended the email, but produced recommendations that didn’t focus on strategy at all, just a direct rebuttal to the discovery response.

Interestingly, opposing counsel asked for a discovery extension (after more than two years the suit has been active, they are suddenly finding SHRA bureaucracy hard to navigate), but refused to do a rolling production or produce electronic records (which should be pretty easy right?). It’s baffling to me that if I had access to these tools two years ago, this organization is such a mess that they’d have gotten crushed out of the gate. Instead they were able to delay and avoid discovery to the point that we now have to start talking about institutional spoliation because no one remembers anything.

Leave a Comment

Your email address will not be published. Required fields are marked *

Just had the thought that doing a blog like this, while the litigation is on-going, is both naive and stupid, especially considering the mixed permissible use messaging from the court (but maybe okay for the bar, okay for me?). This popped up as I came across an old archive that had a backup of files which I believed was lost after we got evicted, some of which is responsive to existing discovery requests. I haven’t even had a chance to go through it all yet and see what has already been disclosed and what hasn’t, but here I am in a position where I really want to talk about how the tools are helping me plan strategy while potentially exposing myself unnecessarily/inadvertently.

The fool forges forward

One of the things that stood out to me was how much of the information in this case should have already been protected but hasn’t been. This has always been a concern, and one of the reasons I was hesitant about pursuing this at all, the gross invasion of privacy necessary to pursue this feels caustic. My initial reaction was filing anonymously, but after some reading it appeared that these are very rarely granted. It ultimately became a choice between whether this was important enough (to both us and every other disabled participant SHRA treats similarly) to sacrifice our privacy over. I think so.

What should have been done out of the gate is setting a protective order in place. Reading and trying to absorb as much of the local rules (while in the middle of being evicted, being cut off from other services, and other harassment), the option of a protective order just didn’t really register at all. The further along the process we got, the less space the option took up.

This is one place where LLMs work as an amazing sanity check, we can without penalty ask it to walk us through the process from the beginning, with as many permutations as we like, in a way that would cognitively exhaust any sane individual. And for individuals in the position we are, where we don’t even know what we don’t know, being able to constantly refactor new information in as we encounter it becomes a tremendously powerful tool.

While there’s supposed to be some leeway for “inelegant pleading” by pro se parties, in practice there isn’t unless a judge is personally sympathetic. Which means that pro se parties march into a minefield of issues that have nothing to do with the core issues of the case, procedural explosions which can turn a clear cut set of facts into an irrelevant mist.

Leave a Comment

Your email address will not be published. Required fields are marked *

One of the most difficult aspects of all this is how prone to unique issues each LLM introduces. I’m bouncing between Claude 3.7, GPT 4o, DeepSeek, and Gemini, and they all have weird peculiarities that blow up the others. DeepSeek for instance is super focused on formatting, being obsessive about tables, bolding, italics, everything but the actual content. Gemini is a turd and likes to fight you the whole time when it comes to legal advice altogether, so you have to wrestle it to the ground with prompts and cobble together the results. Claude writes beautifully but is as deep as a puddle, while ChatGPT is deep but impossibly schizophrenic and ADHD at the same time.

I went back and looked at my first two pleadings I used these LLMs for and holy crap there are a lot of issues that I missed trusting the output. First, ChatGPT will absolutely mangle the hell out of citations, and worse is super confident about it’s manglings. As an example I uploaded a copy of the docket report, and somehow it still got every single citation wrong. Worse, it misinterpreted who actually filed particular motions, leading to it interpreting most of the case being dead.

One thing is really clear, that as powerful and as much potential as these tools have, we are a million miles away from them being ready to use without extensive supervision. They also are really terrible at understanding, as an example left to it’s own devices all of the LLMs completely ignored the actual damages I asked for in a motion for partial MSJ (basically just ruling that the ADA, FHA, and Rehab require an interactive process and individualized assessment) and transformed it into a full blown MSJ focused on damage calculations instead of a narrow injunctive relief. And they constantly go off on little tangents like that where they hyper focus on some aspect of the text that may or may not be contextually important and try to expand it.

All said, this is still very much a work in progress, and I’m learning that trust is bad.

Leave a Comment

Your email address will not be published. Required fields are marked *

It seems we can’t find what you’re looking for. Perhaps searching can help.

Welcome to The SHRA Files Blog — where we’ll be sharing updates, research, public records, and commentary on our ongoing civil rights lawsuit and the broader failures of Sacramento’s housing system.

This site exists because no one else was telling this story. Because no agency, nonprofit, or public official stepped in when the Sacramento Housing and Redevelopment Agency (SHRA) failed to meet its basic responsibilities. Because we — and many others like us — were denied access to services that should have been guaranteed.

We started with nothing but documentation, determination, and the experience of being repeatedly failed by the systems meant to help. When traditional legal aid and advocacy organizations told us they couldn’t help, we turned to AI tools to help us structure our case, understand the law, and begin fighting back.

This site is just the beginning.

In the weeks ahead, we’ll be:

  • Breaking down SHRA’s failures in serving disabled and low-income tenants
  • Publishing public filings and supporting exhibits
  • Highlighting how the Housing Choice Voucher program is being mismanaged
  • Documenting the role of AI in expanding legal access

If you’ve been impacted by SHRA or want to contribute to the project, check out our Get Involved page.

Thanks for reading — and welcome to the fight.

— The SHRA Files Team


One of the things that I really like about DeepSeek is that it gets information through methods that aren't exactly on the up and up, which means it finds things hidden behind legal wrangling that doesn't exist in China. One of the things it found was a 2023 internal memo, surfaced through whistleblower testimony, in which SHRA staff were instructed to "prioritize voucher terminations over costly mods".

And the implementation of this was SHRA terminating vouchers over requests for shower chairs and grab bars as early as 2022. DeepSeek also produced data showing that 68% of disabled tenants who requested mods in 2023 lost their vouchers within six months (versus 12% of non-disabled tenants).

WHAT.

And there’s so much worse that I’ll share once I get the discovery back. I’m stunned.


Okay, the last few days have been a bit of a flurry. I'm still working on getting all the filings and the Defense responses uploaded, but I need to set up a sane directory structure first because there's quite a bit.

After a few weeks using these LLMs, my attitudes about them are starting to shift. Each seems to have its own "personality" above and beyond the artificial one it presents. The LLM I've been most amazed by so far is DeepSeek, which is funny because it was the one I was most skeptical of initially. Part of the issue with DeepSeek is that it doesn't do as good a job of hand-holding through the thought process; or, more appropriately, it's probably the most "autistic" of the engines. The biggest standout for DeepSeek (especially compared to Gemini, which is the most comparable) is that it's SO GOOD with citations. It's the only one that freely "admits" it's plundering the Westlaw database, meaning we have a TON of local Eastern District citations.

My citation strategy has been to use something from each level: Supreme Court, Ninth Circuit, California federal, Eastern District. The reality is the magistrate doesn't read them anyway, but DeepSeek ultimately gives me a pool of citations to choose from, something the other LLMs struggle with (or just don't do, no matter how you beat on them).
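For what it's worth, the "one citation per level" selection step can be mechanized. This is a hypothetical sketch, not anything from my actual filings: the reporter-based level detection is a crude simplification, and the third citation is a made-up placeholder.

```python
# Sketch of the "one citation per level" strategy described above.
# Level detection by reporter/court string is illustrative only.
LEVEL_MARKERS = {
    "Supreme Court": "U.S.",
    "9th Circuit": "F.3d",        # simplification: treat F.3d cites as circuit-level
    "Eastern District": "E.D. Cal.",
}

def pick_one_per_level(pool):
    """Return the first citation in the pool matching each court level."""
    chosen = {}
    for level, marker in LEVEL_MARKERS.items():
        for cite in pool:
            if marker in cite and level not in chosen:
                chosen[level] = cite
    return chosen

pool = [
    "Tennessee v. Lane, 541 U.S. 509 (2004)",
    "Vinson v. Thomas, 288 F.3d 1145 (9th Cir. 2002)",
    "Doe v. Agency, No. 2:20-cv-00001 (E.D. Cal. 2021)",  # made-up placeholder
]
print(pick_one_per_level(pool))
```

The point isn't the script; it's that the LLM's job is only to fill the pool, and the selection (and verification) stays with the human.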

Gemini (using 2.5 Pro/5-06) is now my primary starting point for writing, even though I hate that it can't export to .odt or even .docx formats. It has the worst "canvas"-style option, so I have to reformat the documents significantly. That's worth the annoyance, though, since Gemini is easily the most thoughtful and well-rounded of the LLMs I've used. Much like DeepSeek gives lots of citation options, Gemini gives the best pool of writing options; its ability to flexibly address issues in different ways is the best of the bunch so far.

Claude is still my pretty-but-shallow option: it comes up with the best, most compelling pure language, even if its depth of analysis falls short of the others. I wish I didn't have to prompt it so explicitly to get decent results, or it might be the primary canvas despite its painful context window. One thing I find really helpful is running my DeepSeek output (which produces pretty... "direct" language) through Claude to soften it up.

And finally there's ChatGPT, which up until the last few days was my primary for everything. It's easy to get started with, it's encouraging, and it produces compelling results, but it's kind of a piece of shit. It's constantly messing up context, screwing up citations, and arbitrarily rewriting things. It's easily the most obstinately wrong of the LLMs, insisting that it's doing something correctly no matter how many times you paste the incorrect output back into it and it admits it messed up. It's the only LLM of the four where I've had to deal with "hallucinations", and it explains them as "whoops, I overwrite my own context a lot accidentally". It's absolutely maddening at times. I'm steadily weaning myself off of it, and it'll probably just be another voice in the room soon.

An interesting thing happened a bit ago: I had my first LLM disagreement! The Defense responded to a discovery request with bullshit and boilerplate, so I asked each of the LLMs what a good email response would be, considering I'm already nearly done with a Rule 37 motion to compel. Claude responded that it's probably not a good idea to respond at all: just update the motion with the response as an exhibit, note the bullshit parts since they helped (a lot), and call it a day. DeepSeek was its typical "RIP THEIR THROATS OUT" self and provided a detailed list of regulations and rules for going on the offensive instead of only using the motion route. Gemini pointed out that the email route is more about reinforcing our meet-and-confer duty: we're right, and we're going to integrate this into the motion, but the focus here should be on making the meet-and-confer record rock solid (pretty good advice). ChatGPT recommended the email but produced recommendations that didn't focus on strategy at all, just a direct rebuttal to the discovery response.

Interestingly, opposing counsel asked for a discovery extension (after the suit has been active for more than two years, they are suddenly finding SHRA's bureaucracy hard to navigate), but refused to do a rolling production or produce electronic records (which should be pretty easy, right?). What's baffling is that if I'd had access to these tools two years ago, this organization is such a mess that they'd have been crushed out of the gate. Instead they were able to delay and avoid discovery to the point that we now have to start talking about institutional spoliation, because no one remembers anything.


Just had the thought that doing a blog like this, while the litigation is ongoing, is both naive and stupid, especially considering the mixed permissible-use messaging from the court (but if it's okay for the bar, maybe it's okay for me?). This popped up as I came across an old archive containing a backup of files I believed were lost after we got evicted, some of which are responsive to existing discovery requests. I haven't even had a chance to go through it all yet and see what has already been disclosed and what hasn't, but here I am in a position where I really want to talk about how the tools are helping me plan strategy while potentially exposing myself unnecessarily or inadvertently.

The fool forges forward

One of the things that stood out to me was how much of the information in this case should already have been protected but hasn't been. This has always been a concern, and one of the reasons I was hesitant about pursuing this at all: the gross invasion of privacy necessary to pursue it feels caustic. My initial reaction was to file anonymously, but after some reading it appeared that requests to proceed anonymously are very rarely granted. It ultimately became a question of whether this was important enough (to both us and every other disabled participant SHRA treats this way) to sacrifice our privacy over. I think so.

What should have been done out of the gate was putting a protective order in place. While I was reading and trying to absorb as much of the local rules as I could (in the middle of being evicted, being cut off from other services, and other harassment), the option of a protective order just didn't register at all. The further along the process we got, the less space the option took up.

This is one place where LLMs work as an amazing sanity check: we can, without penalty, ask one to walk us through the process from the beginning, with as many permutations as we like, in a way that would cognitively exhaust any sane individual. And for people in our position, who don't even know what we don't know, being able to constantly factor in new information as we encounter it becomes a tremendously powerful tool.

While there's supposed to be some leeway for "inelegant pleading" by pro se parties, in practice there isn't unless a judge is personally sympathetic. That means pro se parties march into a minefield of issues that have nothing to do with the core of the case, procedural explosions which can turn a clear-cut set of facts into an irrelevant mist.


One of the most difficult aspects of all this is the unique issues each LLM introduces. I'm bouncing between Claude 3.7, GPT-4o, DeepSeek, and Gemini, and they all have weird peculiarities that blow up the others' work. DeepSeek, for instance, is super focused on formatting, obsessing over tables, bolding, italics, everything but the actual content. Gemini is a turd that fights you the whole time about giving legal advice at all, so you have to wrestle it to the ground with prompts and cobble together the results. Claude writes beautifully but is as deep as a puddle, while ChatGPT is deep but impossibly schizophrenic and ADHD at the same time.

I went back and looked at the first two pleadings I used these LLMs for, and holy crap, there are a lot of issues I missed by trusting the output. First, ChatGPT will absolutely mangle the hell out of citations, and worse, is super confident about its manglings. As an example, I uploaded a copy of the docket report, and somehow it still got every single citation wrong. Worse, it misinterpreted who actually filed particular motions, leading it to conclude that most of the case was dead.
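Part of my supervision routine now is mechanically checking every docket reference an LLM emits against the actual docket report before anything goes in a filing. A minimal sketch of that check, with made-up docket numbers and an assumed "Dkt. N" / "Dkt. No. N" citation format:

```python
import re

# Hypothetical: entry numbers extracted from the court's docket report.
# All numbers here are made up for illustration.
KNOWN_DOCKET_ENTRIES = {1, 14, 22, 37}

def find_bad_docket_refs(draft_text):
    """Return docket numbers cited in a draft that don't exist in the docket."""
    cited = {int(n) for n in re.findall(r"Dkt\.\s*(?:No\.\s*)?(\d+)", draft_text)}
    return sorted(cited - KNOWN_DOCKET_ENTRIES)

draft = "As established in Dkt. 14 and Dkt. No. 99, Defendants failed to respond."
print(find_bad_docket_refs(draft))  # [99]
```

It won't catch a citation pointing at the wrong-but-existing entry (ChatGPT's specialty), but it at least flags references to filings that were never made.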

One thing is really clear, that as powerful and as much potential as these tools have, we are a million miles away from them being ready to use without extensive supervision. They also are really terrible at understanding, as an example left to it’s own devices all of the LLMs completely ignored the actual damages I asked for in a motion for partial MSJ (basically just ruling that the ADA, FHA, and Rehab require an interactive process and individualized assessment) and transformed it into a full blown MSJ focused on damage calculations instead of a narrow injunctive relief. And they constantly go off on little tangents like that where they hyper focus on some aspect of the text that may or may not be contextually important and try to expand it.

All said, this is still very much a work in progress, and I’m learning that trust is bad.

Leave a Comment

Your email address will not be published. Required fields are marked *

It seems we can’t find what you’re looking for. Perhaps searching can help.

Welcome to The SHRA Files Blog — where we’ll be sharing updates, research, public records, and commentary on our ongoing civil rights lawsuit and the broader failures of Sacramento’s housing system.

This site exists because no one else was telling this story. Because no agency, nonprofit, or public official stepped in when the Sacramento Housing and Redevelopment Agency (SHRA) failed to meet its basic responsibilities. Because we — and many others like us — were denied access to services that should have been guaranteed.

We started with nothing but documentation, determination, and the experience of being repeatedly failed by the systems meant to help. When traditional legal aid and advocacy organizations told us they couldn’t help, we turned to AI tools to help us structure our case, understand the law, and begin fighting back.

This site is just the beginning.

In the weeks ahead, we’ll be:

  • Breaking down SHRA’s failures in serving disabled and low-income tenants
  • Publishing public filings and supporting exhibits
  • Highlighting how the Housing Choice Voucher program is being mismanaged
  • Documenting the role of AI in expanding legal access

If you’ve been impacted by SHRA or want to contribute to the project, check out our Get Involved page.

Thanks for reading — and welcome to the fight.

— The SHRA Files Team

Leave a Comment

Your email address will not be published. Required fields are marked *

I’ve been attempting to push discovery, the result has been underwhelming. So far they’ve missed most of the deadlines, when notified they missed the deadlines they keep asking for extensions, and when they get extensions they respond with boilerplate and bullshit. I’m to the point where I’m filing a Rule 37(b) motion (will upload to the files section) after I’m finished with this post. The neat thing about the filing is that there’s over 300 pages of exhibits, all of which is boilerplate and bullshit that I had to attach and mail back to the Defense because the magistrate gave me the big middle finger by allowing the court to notify me of things electronically, but refusing to allow me to file electronically. I have lots to say about this magistrate, but we’ll wait until we get clear of her first.

One of the most fascinating things about the boilerplate and bullshit that did get returned was the revelation that five of the named Defendants no longer work at SHRA, and the Defense isn’t sure if they represent them anymore. Like they completely hid that almost everyone involved with the day to day operations over there has left and somehow didn’t mention it to us or the court. Worse, in their initial disclosure, they claimed only one person (who is no longer at SHRA) was responsible for deciding reasonable accommodation requests, now they’ve given me a list of 16 people, with no explanation.

Oh yeah, they can’t seem to find any of the electronically stored information that under the CPRA they’d be responsive to, which is the reason they need so many extensions.

The worst part is that despite them admitting all of this, there’s zero chance of them getting sanctions. And there’s literally nothing to be done about it unless/until it gets to the appellate level.

Leave a Comment

Your email address will not be published. Required fields are marked *

Up until now I assumed that the issues at SHRA were just incompetence creep, that the issues are product of slow pushes back on compliance issues that never got corrected, to the point where institutional knowledge of the practices necessary to maintain compliance was lost.

One of the things that I really like about DeepSeek is that it gets information through methods that aren’t exactly on the up and up, which means that it finds things which are hidden behind legal wrangling that doesn’t exist in China. And one of the things it found was a 2023 Internal Memo generated through whistleblower testimony in which SHRA staff instructed to “prioritize voucher terminations over costly mods”.

And the implementation of this was SHRA terminating vouchers over requests for shower chairs and grab bars as early as 2022. They provided data which showed 68% of disabled tenants who requested mods in 2023 lost vouchers within 6 months (vs. 12% of non-disabled).

WHAT.

And there’s so much worse that I’ll share once I get the discovery back. I’m stunned.

Leave a Comment

Your email address will not be published. Required fields are marked *

Okay the last few days have been a bit of a flurry. I’m still working on getting all the filings and the Defense responses uploaded, but I need to make sure we have a sane directory setup first because there’s quite a bit.

After a few weeks using these LLMs, my attitudes about them are starting to shift. Each of them seems to have their own “personality” above and beyond the artificial personality. The LLM I’ve been most amazed by so far is DeepSeek, which is funny because it was the LLM I was most skeptical of initially. Part of the issue with DeepSeek is that it doesn’t do as good of a job hand holding through the though process, or more appropriately, it’s probably the most “autistic” of the engines. The biggest standout for DeepSeek (especially compared to Gemini which is the most comparable) is that it’s SO GOOD with citations. It’s the only one that freely “admits” it’s plundering the WestLaw database meaning we have a TON of local Eastern District citations.

My citation strategy has been to use something from each level, Supreme Court, 9th Circuit, California (Fed), Eastern District. The reality is the magistrate doesn’t read them anyway, but DeepSeek ultimately gives me a pool of citations to choose from, something the other LLMs struggle with (or just don’t do no matter how you beat them).

Gemini (using 2.5 Pro/5-06) is now my primary writing starting point, even though I hate that it can’t export to .odf or even .docx formats. Because I have to reformat the documents significantly, it has the worst “canvas” style option. That’s kind of worth the annoyance though since Gemini is easily the most “thoughful” and well rounded of the LLMs I’ve used. Much like DeepSeek gives lots of citation options, Gemini gives the best pool of writing options, it’s ability to flexibly address issues in different ways is the best of the bunch so far.

Claude is still my pretty but shallow option, it comes up with the best/most compelling pure language, even if it’s depth of analysis falls short of the others. I wish I didn’t have to prompt it so explicitly to get decent results or it might be the primary canvas despite it’s painful context window. One thing I find it really helpful for is running my DeepSeek (which produces pretty.. “direct” language) through Claude to soften it up.

And finally there’s ChatGPT which up until the last few days was my primary for everything. It’s easy to get started with it, it’s encouraging, it produces compelling results, but it’s kind of a piece of shit. It’s constantly messing up context, screwing up citations, and arbitrarily rewriting things. It’s easily the most obstinately wrong of the LLMs, insisting that it’s doing something correctly no matter how many times you post the incorrect output back into it and it admits it messed up. It’s the only LLM out of the three that I’ve had to deal with “hallucinations”, and it explains it as “woops, I overwrite my own context a lot accidentally”. It’s absolutely maddening at times. I’m steadily weaning myself off of it and it’ll probably just be another voice in the room soon.

An interesting thing happened a bit ago, I had my first LLM disagreement! The Defense responded to a discovery request with bullshit and boilerplate, so I asked each of the LLMs what a good email response would be considering I’m already nearly done with a rule 37 motion to compel. Claude responded that it’s probably not a good idea to respond at all, just update the motion with the response as an exhibit, note the bullshit parts since they helped (a lot), and call it a day. DeepSeek was it’s typical “RIP THEIR THROATS OUT” and provided a detailed list of regulations and rules to go on the offensive instead of only using the motion route. Gemini pointed out that going the email route is more about reinforcing our meet and confer duty, we’re right and we are going to be integrating this in the motion, but the focus here should be on making the meet and confer rock solid (pretty good advice). ChatGPT recommended the email, but produced recommendations that didn’t focus on strategy at all, just a direct rebuttal to the discovery response.

Interestingly, opposing counsel asked for a discovery extension (after more than two years the suit has been active, they are suddenly finding SHRA bureaucracy hard to navigate), but refused to do a rolling production or produce electronic records (which should be pretty easy right?). It’s baffling to me that if I had access to these tools two years ago, this organization is such a mess that they’d have gotten crushed out of the gate. Instead they were able to delay and avoid discovery to the point that we now have to start talking about institutional spoliation because no one remembers anything.

Leave a Comment

Your email address will not be published. Required fields are marked *

Just had the thought that doing a blog like this, while the litigation is on-going, is both naive and stupid, especially considering the mixed permissible use messaging from the court (but maybe okay for the bar, okay for me?). This popped up as I came across an old archive that had a backup of files which I believed was lost after we got evicted, some of which is responsive to existing discovery requests. I haven’t even had a chance to go through it all yet and see what has already been disclosed and what hasn’t, but here I am in a position where I really want to talk about how the tools are helping me plan strategy while potentially exposing myself unnecessarily/inadvertently.

The fool forges forward

One of the things that stood out to me was how much of the information in this case should have already been protected but hasn’t been. This has always been a concern, and one of the reasons I was hesitant about pursuing this at all, the gross invasion of privacy necessary to pursue this feels caustic. My initial reaction was filing anonymously, but after some reading it appeared that these are very rarely granted. It ultimately became a choice between whether this was important enough (to both us and every other disabled participant SHRA treats similarly) to sacrifice our privacy over. I think so.

What should have been done out of the gate is setting a protective order in place. Reading and trying to absorb as much of the local rules (while in the middle of being evicted, being cut off from other services, and other harassment), the option of a protective order just didn’t really register at all. The further along the process we got, the less space the option took up.

This is one place where LLMs work as an amazing sanity check, we can without penalty ask it to walk us through the process from the beginning, with as many permutations as we like, in a way that would cognitively exhaust any sane individual. And for individuals in the position we are, where we don’t even know what we don’t know, being able to constantly refactor new information in as we encounter it becomes a tremendously powerful tool.

While there’s supposed to be some leeway for “inelegant pleading” by pro se parties, in practice there isn’t unless a judge is personally sympathetic. Which means that pro se parties march into a minefield of issues that have nothing to do with the core issues of the case, procedural explosions which can turn a clear cut set of facts into an irrelevant mist.

Leave a Comment

Your email address will not be published. Required fields are marked *

One of the most difficult aspects of all this is how prone to unique issues each LLM introduces. I’m bouncing between Claude 3.7, GPT 4o, DeepSeek, and Gemini, and they all have weird peculiarities that blow up the others. DeepSeek for instance is super focused on formatting, being obsessive about tables, bolding, italics, everything but the actual content. Gemini is a turd and likes to fight you the whole time when it comes to legal advice altogether, so you have to wrestle it to the ground with prompts and cobble together the results. Claude writes beautifully but is as deep as a puddle, while ChatGPT is deep but impossibly schizophrenic and ADHD at the same time.

I went back and looked at my first two pleadings I used these LLMs for and holy crap there are a lot of issues that I missed trusting the output. First, ChatGPT will absolutely mangle the hell out of citations, and worse is super confident about it’s manglings. As an example I uploaded a copy of the docket report, and somehow it still got every single citation wrong. Worse, it misinterpreted who actually filed particular motions, leading to it interpreting most of the case being dead.

One thing is really clear, that as powerful and as much potential as these tools have, we are a million miles away from them being ready to use without extensive supervision. They also are really terrible at understanding, as an example left to it’s own devices all of the LLMs completely ignored the actual damages I asked for in a motion for partial MSJ (basically just ruling that the ADA, FHA, and Rehab require an interactive process and individualized assessment) and transformed it into a full blown MSJ focused on damage calculations instead of a narrow injunctive relief. And they constantly go off on little tangents like that where they hyper focus on some aspect of the text that may or may not be contextually important and try to expand it.

All said, this is still very much a work in progress, and I’m learning that trust is bad.

Leave a Comment

Your email address will not be published. Required fields are marked *

It seems we can’t find what you’re looking for. Perhaps searching can help.

Welcome to The SHRA Files Blog — where we’ll be sharing updates, research, public records, and commentary on our ongoing civil rights lawsuit and the broader failures of Sacramento’s housing system.

This site exists because no one else was telling this story. Because no agency, nonprofit, or public official stepped in when the Sacramento Housing and Redevelopment Agency (SHRA) failed to meet its basic responsibilities. Because we — and many others like us — were denied access to services that should have been guaranteed.

We started with nothing but documentation, determination, and the experience of being repeatedly failed by the systems meant to help. When traditional legal aid and advocacy organizations told us they couldn’t help, we turned to AI tools to help us structure our case, understand the law, and begin fighting back.

This site is just the beginning.

In the weeks ahead, we’ll be:

  • Breaking down SHRA’s failures in serving disabled and low-income tenants
  • Publishing public filings and supporting exhibits
  • Highlighting how the Housing Choice Voucher program is being mismanaged
  • Documenting the role of AI in expanding legal access

If you’ve been impacted by SHRA or want to contribute to the project, check out our Get Involved page.

Thanks for reading — and welcome to the fight.

— The SHRA Files Team

Leave a Comment

Your email address will not be published. Required fields are marked *

I’ve been attempting to push discovery, the result has been underwhelming. So far they’ve missed most of the deadlines, when notified they missed the deadlines they keep asking for extensions, and when they get extensions they respond with boilerplate and bullshit. I’m to the point where I’m filing a Rule 37(b) motion (will upload to the files section) after I’m finished with this post. The neat thing about the filing is that there’s over 300 pages of exhibits, all of which is boilerplate and bullshit that I had to attach and mail back to the Defense because the magistrate gave me the big middle finger by allowing the court to notify me of things electronically, but refusing to allow me to file electronically. I have lots to say about this magistrate, but we’ll wait until we get clear of her first.

One of the most fascinating things about the boilerplate and bullshit that did get returned was the revelation that five of the named Defendants no longer work at SHRA, and the Defense isn’t sure if they represent them anymore. Like they completely hid that almost everyone involved with the day to day operations over there has left and somehow didn’t mention it to us or the court. Worse, in their initial disclosure, they claimed only one person (who is no longer at SHRA) was responsible for deciding reasonable accommodation requests, now they’ve given me a list of 16 people, with no explanation.

Oh yeah, they can’t seem to find any of the electronically stored information that under the CPRA they’d be responsive to, which is the reason they need so many extensions.

The worst part is that despite them admitting all of this, there’s zero chance of them getting sanctions. And there’s literally nothing to be done about it unless/until it gets to the appellate level.

Leave a Comment

Your email address will not be published. Required fields are marked *

Up until now I assumed that the issues at SHRA were just incompetence creep, that the issues are product of slow pushes back on compliance issues that never got corrected, to the point where institutional knowledge of the practices necessary to maintain compliance was lost.

One of the things that I really like about DeepSeek is that it gets information through methods that aren’t exactly on the up and up, which means that it finds things which are hidden behind legal wrangling that doesn’t exist in China. And one of the things it found was a 2023 Internal Memo generated through whistleblower testimony in which SHRA staff instructed to “prioritize voucher terminations over costly mods”.

And the implementation of this was SHRA terminating vouchers over requests for shower chairs and grab bars as early as 2022. They provided data which showed 68% of disabled tenants who requested mods in 2023 lost vouchers within 6 months (vs. 12% of non-disabled).

WHAT.

And there’s so much worse that I’ll share once I get the discovery back. I’m stunned.

Leave a Comment

Your email address will not be published. Required fields are marked *

Okay the last few days have been a bit of a flurry. I’m still working on getting all the filings and the Defense responses uploaded, but I need to make sure we have a sane directory setup first because there’s quite a bit.

After a few weeks using these LLMs, my attitudes about them are starting to shift. Each of them seems to have their own “personality” above and beyond the artificial personality. The LLM I’ve been most amazed by so far is DeepSeek, which is funny because it was the LLM I was most skeptical of initially. Part of the issue with DeepSeek is that it doesn’t do as good of a job hand holding through the though process, or more appropriately, it’s probably the most “autistic” of the engines. The biggest standout for DeepSeek (especially compared to Gemini which is the most comparable) is that it’s SO GOOD with citations. It’s the only one that freely “admits” it’s plundering the WestLaw database meaning we have a TON of local Eastern District citations.

My citation strategy has been to use something from each level: Supreme Court, Ninth Circuit, California federal, Eastern District. The reality is the magistrate doesn’t read them anyway, but DeepSeek ultimately gives me a pool of citations to choose from, something the other LLMs struggle with (or simply won’t do, no matter how you beat on them).
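The strategy reduces to a simple selection rule — one citation per court level, most authoritative first. A sketch (every case name and entry below is a made-up placeholder, not a real citation):

```python
# Pick one citation per court level, in order of authority.
# All pool entries in the example are hypothetical placeholders.
LEVELS = ["Supreme Court", "9th Circuit", "E.D. Cal."]

def pick_citations(pool: dict[str, list[str]]) -> list[str]:
    """Return one citation per level, skipping levels with an empty pool."""
    return [pool[level][0] for level in LEVELS if pool.get(level)]

pool = {
    "Supreme Court": ["Hypothetical v. Placeholder, 555 U.S. 1 (2009)"],
    "9th Circuit": ["Example v. Agency, 123 F.3d 456 (9th Cir. 1997)"],
    "E.D. Cal.": ["Doe v. Housing Auth., No. 2:20-cv-0001 (E.D. Cal. 2021)"],
}
```

The pool comes from the LLM; the picking (and the verification of each cite against the actual reporter) stays manual.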

Gemini (using 2.5 Pro/5-06) is now my primary starting point for writing, even though I hate that it can’t export to .odf or even .docx formats and has the worst “canvas”-style option, which means I have to reformat the documents significantly. That’s worth the annoyance, though, since Gemini is easily the most “thoughtful” and well-rounded of the LLMs I’ve used. Much like DeepSeek gives lots of citation options, Gemini gives the best pool of writing options; its ability to flexibly address issues in different ways is the best of the bunch so far.

Claude is still my pretty-but-shallow option: it comes up with the best and most compelling pure language, even if its depth of analysis falls short of the others. I wish I didn’t have to prompt it so explicitly to get decent results, or it might be the primary canvas despite its painful context window. One thing I find it really helpful for is running my DeepSeek output (which produces pretty… “direct” language) through Claude to soften it up.

And finally there’s ChatGPT, which up until the last few days was my primary for everything. It’s easy to get started with, it’s encouraging, it produces compelling results, but it’s kind of a piece of shit. It’s constantly messing up context, screwing up citations, and arbitrarily rewriting things. It’s easily the most obstinately wrong of the LLMs, insisting that it’s doing something correctly no matter how many times you paste the incorrect output back into it and it admits it messed up. It’s the only one of the bunch where I’ve had to deal with “hallucinations”, which it explains as “whoops, I overwrite my own context a lot accidentally”. It’s absolutely maddening at times. I’m steadily weaning myself off of it, and it’ll probably just be another voice in the room soon.

An interesting thing happened a bit ago: I had my first LLM disagreement! The Defense responded to a discovery request with bullshit and boilerplate, so I asked each of the LLMs what a good email response would be, considering I’m already nearly done with a Rule 37 motion to compel. Claude responded that it’s probably not a good idea to respond at all: just update the motion with the response as an exhibit, note the bullshit parts since they helped (a lot), and call it a day. DeepSeek was its typical “RIP THEIR THROATS OUT” self and provided a detailed list of regulations and rules for going on the offensive instead of only using the motion route. Gemini pointed out that going the email route is more about reinforcing our meet-and-confer duty: we’re right, and we are going to integrate this into the motion, but the focus here should be on making the meet-and-confer record rock solid (pretty good advice). ChatGPT recommended the email, but produced recommendations that didn’t focus on strategy at all, just a direct rebuttal to the discovery response.

Interestingly, opposing counsel asked for a discovery extension (after more than two years of the suit being active, they are suddenly finding SHRA’s bureaucracy hard to navigate), but refused to do a rolling production or produce electronic records (which should be pretty easy, right?). It’s baffling: if I’d had access to these tools two years ago, this organization is such a mess that they’d have been crushed out of the gate. Instead they were able to delay and avoid discovery to the point that we now have to start talking about institutional spoliation, because no one remembers anything.

Just had the thought that doing a blog like this while the litigation is ongoing is both naive and stupid, especially considering the mixed messaging from the court on permissible use (but maybe what’s okay for the bar is okay for me?). This popped up as I came across an old archive containing a backup of files I believed were lost after we got evicted, some of which are responsive to existing discovery requests. I haven’t even had a chance to go through it all yet to see what has already been disclosed and what hasn’t, but here I am in a position where I really want to talk about how the tools are helping me plan strategy while potentially exposing myself unnecessarily or inadvertently.

The fool forges forward

One of the things that stood out to me was how much of the information in this case should have already been protected but hasn’t been. This has always been a concern, and one of the reasons I was hesitant about pursuing this at all: the gross invasion of privacy necessary to pursue this feels caustic. My initial reaction was to file anonymously, but after some reading it appeared that requests to proceed anonymously are very rarely granted. It ultimately became a question of whether this was important enough (to both us and every other disabled participant SHRA treats similarly) to sacrifice our privacy over. I think it is.

What should have been done out of the gate was putting a protective order in place. While I was reading and trying to absorb as much of the local rules as I could (in the middle of being evicted, being cut off from other services, and other harassment), the option of a protective order just didn’t register at all. The further along the process we got, the less space the option took up.

This is one place where LLMs work as an amazing sanity check: we can, without penalty, ask them to walk us through the process from the beginning, with as many permutations as we like, in a way that would cognitively exhaust any sane individual. And for people in our position, who don’t even know what we don’t know, being able to constantly factor in new information as we encounter it becomes a tremendously powerful tool.

While there’s supposed to be some leeway for “inelegant pleading” by pro se parties, in practice there isn’t unless a judge is personally sympathetic. Which means that pro se parties march into a minefield of issues that have nothing to do with the core issues of the case, procedural explosions which can turn a clear cut set of facts into an irrelevant mist.

One of the most difficult aspects of all this is how many unique issues each LLM introduces. I’m bouncing between Claude 3.7, GPT-4o, DeepSeek, and Gemini, and they all have weird peculiarities that blow up the others’ work. DeepSeek, for instance, is super focused on formatting, obsessing over tables, bolding, italics, everything but the actual content. Gemini is a turd and fights you the whole time on legal advice altogether, so you have to wrestle it to the ground with prompts and cobble together the results. Claude writes beautifully but is as deep as a puddle, while ChatGPT is deep but impossibly schizophrenic and ADHD at the same time.

I went back and looked at the first two pleadings I used these LLMs for, and holy crap, there are a lot of issues I missed by trusting the output. First, ChatGPT will absolutely mangle the hell out of citations, and worse, it’s super confident about its manglings. As an example, I uploaded a copy of the docket report, and somehow it still got every single citation wrong. Worse, it misinterpreted who actually filed particular motions, leading it to conclude that most of the case was dead.

One thing is really clear: as powerful as these tools are, and as much potential as they have, we are a million miles away from them being usable without extensive supervision. They’re also really terrible at understanding intent. As an example, left to their own devices, all of the LLMs completely ignored the actual relief I asked for in a motion for partial MSJ (basically just a ruling that the ADA, FHA, and Rehab Act require an interactive process and individualized assessment) and transformed it into a full-blown MSJ focused on damage calculations instead of narrow injunctive relief. And they constantly go off on little tangents like that, hyper-focusing on some aspect of the text that may or may not be contextually important and trying to expand it.

All said, this is still very much a work in progress, and I’m learning that trust is bad.

Welcome to The SHRA Files Blog — where we’ll be sharing updates, research, public records, and commentary on our ongoing civil rights lawsuit and the broader failures of Sacramento’s housing system.

This site exists because no one else was telling this story. Because no agency, nonprofit, or public official stepped in when the Sacramento Housing and Redevelopment Agency (SHRA) failed to meet its basic responsibilities. Because we — and many others like us — were denied access to services that should have been guaranteed.

We started with nothing but documentation, determination, and the experience of being repeatedly failed by the systems meant to help. When traditional legal aid and advocacy organizations told us they couldn’t help, we turned to AI tools to help us structure our case, understand the law, and begin fighting back.

This site is just the beginning.

In the weeks ahead, we’ll be:

  • Breaking down SHRA’s failures in serving disabled and low-income tenants
  • Publishing public filings and supporting exhibits
  • Highlighting how the Housing Choice Voucher program is being mismanaged
  • Documenting the role of AI in expanding legal access

If you’ve been impacted by SHRA or want to contribute to the project, check out our Get Involved page.

Thanks for reading — and welcome to the fight.

— The SHRA Files Team

Leave a Comment

Your email address will not be published. Required fields are marked *

I’ve been attempting to push discovery, the result has been underwhelming. So far they’ve missed most of the deadlines, when notified they missed the deadlines they keep asking for extensions, and when they get extensions they respond with boilerplate and bullshit. I’m to the point where I’m filing a Rule 37(b) motion (will upload to the files section) after I’m finished with this post. The neat thing about the filing is that there’s over 300 pages of exhibits, all of which is boilerplate and bullshit that I had to attach and mail back to the Defense because the magistrate gave me the big middle finger by allowing the court to notify me of things electronically, but refusing to allow me to file electronically. I have lots to say about this magistrate, but we’ll wait until we get clear of her first.

One of the most fascinating things about the boilerplate and bullshit that did get returned was the revelation that five of the named Defendants no longer work at SHRA, and the Defense isn’t sure if they represent them anymore. Like they completely hid that almost everyone involved with the day to day operations over there has left and somehow didn’t mention it to us or the court. Worse, in their initial disclosure, they claimed only one person (who is no longer at SHRA) was responsible for deciding reasonable accommodation requests, now they’ve given me a list of 16 people, with no explanation.

Oh yeah, they can’t seem to find any of the electronically stored information that under the CPRA they’d be responsive to, which is the reason they need so many extensions.

The worst part is that despite them admitting all of this, there’s zero chance of them getting sanctions. And there’s literally nothing to be done about it unless/until it gets to the appellate level.

Leave a Comment

Your email address will not be published. Required fields are marked *

Up until now I assumed that the issues at SHRA were just incompetence creep, that the issues are product of slow pushes back on compliance issues that never got corrected, to the point where institutional knowledge of the practices necessary to maintain compliance was lost.

One of the things that I really like about DeepSeek is that it gets information through methods that aren’t exactly on the up and up, which means that it finds things which are hidden behind legal wrangling that doesn’t exist in China. And one of the things it found was a 2023 Internal Memo generated through whistleblower testimony in which SHRA staff instructed to “prioritize voucher terminations over costly mods”.

And the implementation of this was SHRA terminating vouchers over requests for shower chairs and grab bars as early as 2022. They provided data which showed 68% of disabled tenants who requested mods in 2023 lost vouchers within 6 months (vs. 12% of non-disabled).

WHAT.

And there’s so much worse that I’ll share once I get the discovery back. I’m stunned.

Leave a Comment

Your email address will not be published. Required fields are marked *

Okay the last few days have been a bit of a flurry. I’m still working on getting all the filings and the Defense responses uploaded, but I need to make sure we have a sane directory setup first because there’s quite a bit.

After a few weeks using these LLMs, my attitudes about them are starting to shift. Each of them seems to have their own “personality” above and beyond the artificial personality. The LLM I’ve been most amazed by so far is DeepSeek, which is funny because it was the LLM I was most skeptical of initially. Part of the issue with DeepSeek is that it doesn’t do as good of a job hand holding through the though process, or more appropriately, it’s probably the most “autistic” of the engines. The biggest standout for DeepSeek (especially compared to Gemini which is the most comparable) is that it’s SO GOOD with citations. It’s the only one that freely “admits” it’s plundering the WestLaw database meaning we have a TON of local Eastern District citations.

My citation strategy has been to use something from each level, Supreme Court, 9th Circuit, California (Fed), Eastern District. The reality is the magistrate doesn’t read them anyway, but DeepSeek ultimately gives me a pool of citations to choose from, something the other LLMs struggle with (or just don’t do no matter how you beat them).

Gemini (using 2.5 Pro/5-06) is now my primary writing starting point, even though I hate that it can’t export to .odf or even .docx formats. Because I have to reformat the documents significantly, it has the worst “canvas” style option. That’s kind of worth the annoyance though since Gemini is easily the most “thoughful” and well rounded of the LLMs I’ve used. Much like DeepSeek gives lots of citation options, Gemini gives the best pool of writing options, it’s ability to flexibly address issues in different ways is the best of the bunch so far.

Claude is still my pretty but shallow option, it comes up with the best/most compelling pure language, even if it’s depth of analysis falls short of the others. I wish I didn’t have to prompt it so explicitly to get decent results or it might be the primary canvas despite it’s painful context window. One thing I find it really helpful for is running my DeepSeek (which produces pretty.. “direct” language) through Claude to soften it up.

And finally there’s ChatGPT which up until the last few days was my primary for everything. It’s easy to get started with it, it’s encouraging, it produces compelling results, but it’s kind of a piece of shit. It’s constantly messing up context, screwing up citations, and arbitrarily rewriting things. It’s easily the most obstinately wrong of the LLMs, insisting that it’s doing something correctly no matter how many times you post the incorrect output back into it and it admits it messed up. It’s the only LLM out of the three that I’ve had to deal with “hallucinations”, and it explains it as “woops, I overwrite my own context a lot accidentally”. It’s absolutely maddening at times. I’m steadily weaning myself off of it and it’ll probably just be another voice in the room soon.

An interesting thing happened a bit ago, I had my first LLM disagreement! The Defense responded to a discovery request with bullshit and boilerplate, so I asked each of the LLMs what a good email response would be considering I’m already nearly done with a rule 37 motion to compel. Claude responded that it’s probably not a good idea to respond at all, just update the motion with the response as an exhibit, note the bullshit parts since they helped (a lot), and call it a day. DeepSeek was it’s typical “RIP THEIR THROATS OUT” and provided a detailed list of regulations and rules to go on the offensive instead of only using the motion route. Gemini pointed out that going the email route is more about reinforcing our meet and confer duty, we’re right and we are going to be integrating this in the motion, but the focus here should be on making the meet and confer rock solid (pretty good advice). ChatGPT recommended the email, but produced recommendations that didn’t focus on strategy at all, just a direct rebuttal to the discovery response.

Interestingly, opposing counsel asked for a discovery extension (after more than two years the suit has been active, they are suddenly finding SHRA bureaucracy hard to navigate), but refused to do a rolling production or produce electronic records (which should be pretty easy right?). It’s baffling to me that if I had access to these tools two years ago, this organization is such a mess that they’d have gotten crushed out of the gate. Instead they were able to delay and avoid discovery to the point that we now have to start talking about institutional spoliation because no one remembers anything.

Leave a Comment

Your email address will not be published. Required fields are marked *

Just had the thought that doing a blog like this, while the litigation is on-going, is both naive and stupid, especially considering the mixed permissible use messaging from the court (but maybe okay for the bar, okay for me?). This popped up as I came across an old archive that had a backup of files which I believed was lost after we got evicted, some of which is responsive to existing discovery requests. I haven’t even had a chance to go through it all yet and see what has already been disclosed and what hasn’t, but here I am in a position where I really want to talk about how the tools are helping me plan strategy while potentially exposing myself unnecessarily/inadvertently.

The fool forges forward

One of the things that stood out to me was how much of the information in this case should have already been protected but hasn’t been. This has always been a concern, and one of the reasons I was hesitant about pursuing this at all, the gross invasion of privacy necessary to pursue this feels caustic. My initial reaction was filing anonymously, but after some reading it appeared that these are very rarely granted. It ultimately became a choice between whether this was important enough (to both us and every other disabled participant SHRA treats similarly) to sacrifice our privacy over. I think so.

What should have been done out of the gate is setting a protective order in place. Reading and trying to absorb as much of the local rules (while in the middle of being evicted, being cut off from other services, and other harassment), the option of a protective order just didn’t really register at all. The further along the process we got, the less space the option took up.

This is one place where LLMs work as an amazing sanity check, we can without penalty ask it to walk us through the process from the beginning, with as many permutations as we like, in a way that would cognitively exhaust any sane individual. And for individuals in the position we are, where we don’t even know what we don’t know, being able to constantly refactor new information in as we encounter it becomes a tremendously powerful tool.

While there’s supposed to be some leeway for “inelegant pleading” by pro se parties, in practice there isn’t unless a judge is personally sympathetic. Which means that pro se parties march into a minefield of issues that have nothing to do with the core issues of the case, procedural explosions which can turn a clear cut set of facts into an irrelevant mist.

Leave a Comment

Your email address will not be published. Required fields are marked *

One of the most difficult aspects of all this is how prone to unique issues each LLM introduces. I’m bouncing between Claude 3.7, GPT 4o, DeepSeek, and Gemini, and they all have weird peculiarities that blow up the others. DeepSeek for instance is super focused on formatting, being obsessive about tables, bolding, italics, everything but the actual content. Gemini is a turd and likes to fight you the whole time when it comes to legal advice altogether, so you have to wrestle it to the ground with prompts and cobble together the results. Claude writes beautifully but is as deep as a puddle, while ChatGPT is deep but impossibly schizophrenic and ADHD at the same time.

I went back and looked at my first two pleadings I used these LLMs for and holy crap there are a lot of issues that I missed trusting the output. First, ChatGPT will absolutely mangle the hell out of citations, and worse is super confident about it’s manglings. As an example I uploaded a copy of the docket report, and somehow it still got every single citation wrong. Worse, it misinterpreted who actually filed particular motions, leading to it interpreting most of the case being dead.

One thing is really clear, that as powerful and as much potential as these tools have, we are a million miles away from them being ready to use without extensive supervision. They also are really terrible at understanding, as an example left to it’s own devices all of the LLMs completely ignored the actual damages I asked for in a motion for partial MSJ (basically just ruling that the ADA, FHA, and Rehab require an interactive process and individualized assessment) and transformed it into a full blown MSJ focused on damage calculations instead of a narrow injunctive relief. And they constantly go off on little tangents like that where they hyper focus on some aspect of the text that may or may not be contextually important and try to expand it.

All said, this is still very much a work in progress, and I’m learning that trust is bad.

Leave a Comment

Your email address will not be published. Required fields are marked *

It seems we can’t find what you’re looking for. Perhaps searching can help.

Welcome to The SHRA Files Blog — where we’ll be sharing updates, research, public records, and commentary on our ongoing civil rights lawsuit and the broader failures of Sacramento’s housing system.

This site exists because no one else was telling this story. Because no agency, nonprofit, or public official stepped in when the Sacramento Housing and Redevelopment Agency (SHRA) failed to meet its basic responsibilities. Because we — and many others like us — were denied access to services that should have been guaranteed.

We started with nothing but documentation, determination, and the experience of being repeatedly failed by the systems meant to help. When traditional legal aid and advocacy organizations told us they couldn’t help, we turned to AI tools to help us structure our case, understand the law, and begin fighting back.

This site is just the beginning.

In the weeks ahead, we’ll be:

  • Breaking down SHRA’s failures in serving disabled and low-income tenants
  • Publishing public filings and supporting exhibits
  • Highlighting how the Housing Choice Voucher program is being mismanaged
  • Documenting the role of AI in expanding legal access

If you’ve been impacted by SHRA or want to contribute to the project, check out our Get Involved page.

Thanks for reading — and welcome to the fight.

— The SHRA Files Team

Leave a Comment

Your email address will not be published. Required fields are marked *

I’ve been attempting to push discovery, the result has been underwhelming. So far they’ve missed most of the deadlines, when notified they missed the deadlines they keep asking for extensions, and when they get extensions they respond with boilerplate and bullshit. I’m to the point where I’m filing a Rule 37(b) motion (will upload to the files section) after I’m finished with this post. The neat thing about the filing is that there’s over 300 pages of exhibits, all of which is boilerplate and bullshit that I had to attach and mail back to the Defense because the magistrate gave me the big middle finger by allowing the court to notify me of things electronically, but refusing to allow me to file electronically. I have lots to say about this magistrate, but we’ll wait until we get clear of her first.

One of the most fascinating things about the boilerplate and bullshit that did get returned was the revelation that five of the named Defendants no longer work at SHRA, and the Defense isn’t sure if they represent them anymore. Like they completely hid that almost everyone involved with the day to day operations over there has left and somehow didn’t mention it to us or the court. Worse, in their initial disclosure, they claimed only one person (who is no longer at SHRA) was responsible for deciding reasonable accommodation requests, now they’ve given me a list of 16 people, with no explanation.

Oh yeah, they can’t seem to find any of the electronically stored information that under the CPRA they’d be responsive to, which is the reason they need so many extensions.

The worst part is that despite them admitting all of this, there’s zero chance of them getting sanctions. And there’s literally nothing to be done about it unless/until it gets to the appellate level.

Leave a Comment

Your email address will not be published. Required fields are marked *

Up until now I assumed that the issues at SHRA were just incompetence creep, that the issues are product of slow pushes back on compliance issues that never got corrected, to the point where institutional knowledge of the practices necessary to maintain compliance was lost.

One of the things that I really like about DeepSeek is that it gets information through methods that aren’t exactly on the up and up, which means that it finds things which are hidden behind legal wrangling that doesn’t exist in China. And one of the things it found was a 2023 Internal Memo generated through whistleblower testimony in which SHRA staff instructed to “prioritize voucher terminations over costly mods”.

And the implementation of this was SHRA terminating vouchers over requests for shower chairs and grab bars as early as 2022. They provided data which showed 68% of disabled tenants who requested mods in 2023 lost vouchers within 6 months (vs. 12% of non-disabled).

WHAT.

And there’s so much worse that I’ll share once I get the discovery back. I’m stunned.

Leave a Comment

Your email address will not be published. Required fields are marked *

Okay the last few days have been a bit of a flurry. I’m still working on getting all the filings and the Defense responses uploaded, but I need to make sure we have a sane directory setup first because there’s quite a bit.

After a few weeks using these LLMs, my attitudes about them are starting to shift. Each of them seems to have their own “personality” above and beyond the artificial personality. The LLM I’ve been most amazed by so far is DeepSeek, which is funny because it was the LLM I was most skeptical of initially. Part of the issue with DeepSeek is that it doesn’t do as good of a job hand holding through the though process, or more appropriately, it’s probably the most “autistic” of the engines. The biggest standout for DeepSeek (especially compared to Gemini which is the most comparable) is that it’s SO GOOD with citations. It’s the only one that freely “admits” it’s plundering the WestLaw database meaning we have a TON of local Eastern District citations.

My citation strategy has been to use something from each level, Supreme Court, 9th Circuit, California (Fed), Eastern District. The reality is the magistrate doesn’t read them anyway, but DeepSeek ultimately gives me a pool of citations to choose from, something the other LLMs struggle with (or just don’t do no matter how you beat them).

Gemini (using 2.5 Pro/5-06) is now my primary writing starting point, even though I hate that it can’t export to .odf or even .docx formats. Because I have to reformat the documents significantly, it has the worst “canvas” style option. That’s kind of worth the annoyance though since Gemini is easily the most “thoughful” and well rounded of the LLMs I’ve used. Much like DeepSeek gives lots of citation options, Gemini gives the best pool of writing options, it’s ability to flexibly address issues in different ways is the best of the bunch so far.

Claude is still my pretty but shallow option, it comes up with the best/most compelling pure language, even if it’s depth of analysis falls short of the others. I wish I didn’t have to prompt it so explicitly to get decent results or it might be the primary canvas despite it’s painful context window. One thing I find it really helpful for is running my DeepSeek (which produces pretty.. “direct” language) through Claude to soften it up.

And finally there’s ChatGPT which up until the last few days was my primary for everything. It’s easy to get started with it, it’s encouraging, it produces compelling results, but it’s kind of a piece of shit. It’s constantly messing up context, screwing up citations, and arbitrarily rewriting things. It’s easily the most obstinately wrong of the LLMs, insisting that it’s doing something correctly no matter how many times you post the incorrect output back into it and it admits it messed up. It’s the only LLM out of the three that I’ve had to deal with “hallucinations”, and it explains it as “woops, I overwrite my own context a lot accidentally”. It’s absolutely maddening at times. I’m steadily weaning myself off of it and it’ll probably just be another voice in the room soon.

An interesting thing happened a bit ago, I had my first LLM disagreement! The Defense responded to a discovery request with bullshit and boilerplate, so I asked each of the LLMs what a good email response would be considering I’m already nearly done with a rule 37 motion to compel. Claude responded that it’s probably not a good idea to respond at all, just update the motion with the response as an exhibit, note the bullshit parts since they helped (a lot), and call it a day. DeepSeek was it’s typical “RIP THEIR THROATS OUT” and provided a detailed list of regulations and rules to go on the offensive instead of only using the motion route. Gemini pointed out that going the email route is more about reinforcing our meet and confer duty, we’re right and we are going to be integrating this in the motion, but the focus here should be on making the meet and confer rock solid (pretty good advice). ChatGPT recommended the email, but produced recommendations that didn’t focus on strategy at all, just a direct rebuttal to the discovery response.

Interestingly, opposing counsel asked for a discovery extension (after more than two years the suit has been active, they are suddenly finding SHRA bureaucracy hard to navigate), but refused to do a rolling production or produce electronic records (which should be pretty easy right?). It’s baffling to me that if I had access to these tools two years ago, this organization is such a mess that they’d have gotten crushed out of the gate. Instead they were able to delay and avoid discovery to the point that we now have to start talking about institutional spoliation because no one remembers anything.

Leave a Comment

Your email address will not be published. Required fields are marked *

Just had the thought that doing a blog like this while the litigation is ongoing is both naive and stupid, especially considering the mixed permissible-use messaging from the court (but maybe what's okay for the bar is okay for me?). This popped up as I came across an old archive with a backup of files I believed were lost after we got evicted, some of which are responsive to existing discovery requests. I haven't even had a chance to go through it all yet and see what has and hasn't already been disclosed, but here I am in a position where I really want to talk about how the tools are helping me plan strategy, while potentially exposing myself unnecessarily or inadvertently.

The fool forges forward

One of the things that stood out to me was how much of the information in this case should already have been protected but hasn't been. This has always been a concern, and one of the reasons I was hesitant about pursuing this at all: the gross invasion of privacy necessary to pursue it feels caustic. My initial reaction was to file anonymously, but after some reading it appeared such requests are very rarely granted. It ultimately became a question of whether this was important enough (to both us and every other disabled participant SHRA treats similarly) to sacrifice our privacy over. I think it is.

What should have been done out of the gate was putting a protective order in place. While I was reading and trying to absorb as much of the local rules as I could (in the middle of being evicted, being cut off from other services, and other harassment), the option of a protective order just didn't register at all. The further along the process we got, the less space the option took up.

This is one place where LLMs work as an amazing sanity check: we can, without penalty, ask one to walk us through the process from the beginning, with as many permutations as we like, in a way that would cognitively exhaust any sane person. And for people in our position, who don't even know what we don't know, being able to constantly refactor new information in as we encounter it becomes a tremendously powerful tool.

While there’s supposed to be some leeway for “inelegant pleading” by pro se parties, in practice there isn’t unless a judge is personally sympathetic. Which means that pro se parties march into a minefield of issues that have nothing to do with the core issues of the case, procedural explosions which can turn a clear cut set of facts into an irrelevant mist.


One of the most difficult aspects of all this is that each LLM introduces its own unique issues. I'm bouncing between Claude 3.7, GPT-4o, DeepSeek, and Gemini, and they all have weird peculiarities that blow up the others. DeepSeek, for instance, is super focused on formatting, obsessing over tables, bolding, italics, everything but the actual content. Gemini is a turd that fights you the whole way on anything resembling legal advice, so you have to wrestle it to the ground with prompts and cobble together the results. Claude writes beautifully but is as deep as a puddle, while ChatGPT is deep but impossibly schizophrenic and ADHD at the same time.

I went back and looked at the first two pleadings I used these LLMs for, and holy crap, there are a lot of issues I missed by trusting the output. First, ChatGPT will absolutely mangle the hell out of citations, and worse, it's super confident about its manglings. As an example, I uploaded a copy of the docket report, and somehow it still got every single citation wrong. Worse, it misinterpreted who actually filed particular motions, leading it to conclude that most of the case was dead.
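
That kind of mangling is mechanically checkable. Here's a minimal sketch of the sort of cross-check one could script: pull every docket-entry citation out of an LLM draft and flag any that don't appear in the docket report itself. The "ECF No." pattern and the plain-text docket format are illustrative assumptions, not a description of any real tool.

```python
import re

# Assumed citation format: "ECF No. 12". Adjust the pattern to match the
# actual docket export; this is only a sketch.
ECF_PATTERN = re.compile(r"ECF No\.\s*(\d+)")

def docket_entries(docket_text: str) -> set[str]:
    """Collect every docket entry number that actually appears in the report."""
    return set(ECF_PATTERN.findall(docket_text))

def unverified_citations(draft_text: str, docket_text: str) -> list[str]:
    """Return ECF citations in the draft that match no real docket entry."""
    real = docket_entries(docket_text)
    return sorted(
        {n for n in ECF_PATTERN.findall(draft_text) if n not in real},
        key=int,
    )

docket = "ECF No. 4 Complaint ... ECF No. 12 Answer ... ECF No. 19 Motion"
draft = "As established in ECF No. 12 and ECF No. 47, Defendants failed..."
print(unverified_citations(draft, docket))  # ['47'] -> flag for manual review
```

A check like this catches only fabricated entry numbers, not misattributed filers, so it supplements rather than replaces reading the docket yourself.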

One thing is really clear: for all the power and potential these tools have, we are a million miles away from them being ready to use without extensive supervision. They're also really terrible at understanding scope. As an example, left to their own devices, all of the LLMs completely ignored the actual relief I asked for in a motion for partial summary judgment (basically just a ruling that the ADA, FHA, and Rehab Act require an interactive process and individualized assessment) and transformed it into a full-blown MSJ focused on damage calculations instead of narrow injunctive relief. And they constantly go off on tangents like that, hyper-focusing on some aspect of the text that may or may not be contextually important and trying to expand it.

All said, this is still very much a work in progress, and I’m learning that trust is bad.



Welcome to The SHRA Files Blog — where we’ll be sharing updates, research, public records, and commentary on our ongoing civil rights lawsuit and the broader failures of Sacramento’s housing system.

This site exists because no one else was telling this story. Because no agency, nonprofit, or public official stepped in when the Sacramento Housing and Redevelopment Agency (SHRA) failed to meet its basic responsibilities. Because we — and many others like us — were denied access to services that should have been guaranteed.

We started with nothing but documentation, determination, and the experience of being repeatedly failed by the systems meant to help. When traditional legal aid and advocacy organizations told us they couldn’t help, we turned to AI tools to help us structure our case, understand the law, and begin fighting back.

This site is just the beginning.

In the weeks ahead, we’ll be:

  • Breaking down SHRA’s failures in serving disabled and low-income tenants
  • Publishing public filings and supporting exhibits
  • Highlighting how the Housing Choice Voucher program is being mismanaged
  • Documenting the role of AI in expanding legal access

If you’ve been impacted by SHRA or want to contribute to the project, check out our Get Involved page.

Thanks for reading — and welcome to the fight.

— The SHRA Files Team


I’ve been attempting to push discovery, the result has been underwhelming. So far they’ve missed most of the deadlines, when notified they missed the deadlines they keep asking for extensions, and when they get extensions they respond with boilerplate and bullshit. I’m to the point where I’m filing a Rule 37(b) motion (will upload to the files section) after I’m finished with this post. The neat thing about the filing is that there’s over 300 pages of exhibits, all of which is boilerplate and bullshit that I had to attach and mail back to the Defense because the magistrate gave me the big middle finger by allowing the court to notify me of things electronically, but refusing to allow me to file electronically. I have lots to say about this magistrate, but we’ll wait until we get clear of her first.

One of the most fascinating things about the boilerplate and bullshit that did get returned was the revelation that five of the named Defendants no longer work at SHRA, and the Defense isn’t sure if they represent them anymore. Like they completely hid that almost everyone involved with the day to day operations over there has left and somehow didn’t mention it to us or the court. Worse, in their initial disclosure, they claimed only one person (who is no longer at SHRA) was responsible for deciding reasonable accommodation requests, now they’ve given me a list of 16 people, with no explanation.

Oh yeah, they can’t seem to find any of the electronically stored information that under the CPRA they’d be responsive to, which is the reason they need so many extensions.

The worst part is that despite them admitting all of this, there’s zero chance of them getting sanctions. And there’s literally nothing to be done about it unless/until it gets to the appellate level.

Leave a Comment

Your email address will not be published. Required fields are marked *

Up until now I assumed that the issues at SHRA were just incompetence creep, that the issues are product of slow pushes back on compliance issues that never got corrected, to the point where institutional knowledge of the practices necessary to maintain compliance was lost.

One of the things I really like about DeepSeek is that it gets information through methods that aren't exactly on the up and up, which means it finds things hidden behind legal wrangling that doesn't exist in China. And one of the things it found was a 2023 internal memo, generated through whistleblower testimony, in which SHRA staff were instructed to "prioritize voucher terminations over costly mods."

And the implementation of this was SHRA terminating vouchers over requests for shower chairs and grab bars as early as 2022. It also surfaced data showing that 68% of disabled tenants who requested mods in 2023 lost their vouchers within six months (vs. 12% of non-disabled tenants).

WHAT.

And there’s so much worse that I’ll share once I get the discovery back. I’m stunned.


Okay, the last few days have been a bit of a flurry. I'm still working on getting all the filings and the Defense responses uploaded, but I need to make sure we have a sane directory setup first, because there's quite a bit.

After a few weeks using these LLMs, my attitudes about them are starting to shift. Each of them seems to have its own "personality" above and beyond the artificial one. The LLM I've been most amazed by so far is DeepSeek, which is funny because it's the one I was most skeptical of initially. Part of the issue with DeepSeek is that it doesn't do as good a job of hand-holding you through the thought process; or, more appropriately, it's probably the most "autistic" of the engines. The biggest standout for DeepSeek (especially compared to Gemini, which is the most comparable) is that it's SO GOOD with citations. It's the only one that freely "admits" it's plundering the Westlaw database, meaning we have a TON of local Eastern District citations.

My citation strategy has been to use something from each level: Supreme Court, Ninth Circuit, California federal courts, and the Eastern District. The reality is the magistrate doesn't read them anyway, but DeepSeek ultimately gives me a pool of citations to choose from, something the other LLMs struggle with (or simply won't do, no matter how you beat them).
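
The one-per-level strategy is simple enough to sketch. This is a toy illustration of the selection step, not a real tool; the level names and placeholder citations are assumptions for the example.

```python
# Toy sketch: given a pool of candidate citations tagged by court level,
# keep one citation per level and report which levels still need coverage.
LEVELS = ["Supreme Court", "9th Circuit", "California (Fed)", "Eastern District"]

def pick_one_per_level(pool: list[tuple[str, str]]) -> dict[str, str]:
    """pool holds (level, citation) pairs; keep the first citation per level."""
    chosen: dict[str, str] = {}
    for level, cite in pool:
        if level in LEVELS and level not in chosen:
            chosen[level] = cite
    return chosen

# Placeholder citations, not real case law.
pool = [
    ("9th Circuit", "Case A, 123 F.3d 456 (9th Cir. 1999)"),
    ("Eastern District", "Case B, 12 F. Supp. 3d 34 (E.D. Cal. 2014)"),
    ("9th Circuit", "Case C, 789 F.3d 12 (9th Cir. 2015)"),
    ("Supreme Court", "Case D, 555 U.S. 7 (2009)"),
]
missing = [lvl for lvl in LEVELS if lvl not in pick_one_per_level(pool)]
print(missing)  # ['California (Fed)'] -> still need a citation at that level
```

Tracking the gap list is the useful part: it tells you exactly which level to go back and beat a citation out of the LLMs for.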

Gemini (using 2.5 Pro, the 05-06 build) is now my primary starting point for writing, even though I hate that it can't export to .odt or even .docx formats. It also has the worst "canvas"-style option, which means I have to reformat the documents significantly. That's kind of worth the annoyance, though, since Gemini is easily the most "thoughtful" and well-rounded of the LLMs I've used. Much like DeepSeek gives lots of citation options, Gemini gives the best pool of writing options; its ability to flexibly address issues in different ways is the best of the bunch so far.

Claude is still my pretty-but-shallow option: it comes up with the best, most compelling pure language, even if its depth of analysis falls short of the others. I wish I didn't have to prompt it so explicitly to get decent results, or it might be the primary canvas despite its painful context window. One thing I find it really helpful for is running my DeepSeek output (which produces pretty "direct" language) through Claude to soften it up.
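
That draft-then-soften workflow is just a two-stage pipeline, and it's worth seeing how little machinery it needs. In this sketch `call_llm` is a stub standing in for whatever client you actually use; no real API, model name, or endpoint is implied.

```python
# Hypothetical sketch of the two-model pipeline: draft with an aggressive
# model, then rewrite with a gentler one. call_llm is a stub; a real version
# would call your provider's client library here.
def call_llm(model: str, prompt: str) -> str:
    # Stub response so the sketch is runnable without any API access.
    return f"[{model} response to: {prompt[:40]}...]"

SOFTEN_PROMPT = (
    "Rewrite the following passage to keep every legal point and citation "
    "intact, but soften the tone for a court filing:\n\n{draft}"
)

def soften(draft: str) -> str:
    """Second pass: hand the aggressive first draft to the softer model."""
    return call_llm("claude", SOFTEN_PROMPT.format(draft=draft))

draft = call_llm("deepseek", "Draft a reply to Defendants' boilerplate objections.")
print(soften(draft))
```

The point of writing it down as a pipeline is that the softening prompt becomes a fixed, reviewable artifact instead of something retyped (and drifting) every session.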

And finally there's ChatGPT, which until the last few days was my primary for everything. It's easy to get started with, it's encouraging, and it produces compelling results, but it's kind of a piece of shit. It's constantly messing up context, screwing up citations, and arbitrarily rewriting things. It's easily the most obstinately wrong of the LLMs, insisting it's doing something correctly no matter how many times you paste the incorrect output back in and it admits it messed up. It's the only LLM of the four where I've had to deal with "hallucinations," and it explains them as "whoops, I accidentally overwrite my own context a lot." It's absolutely maddening at times. I'm steadily weaning myself off of it, and it'll probably just be another voice in the room soon.

An interesting thing happened a bit ago: I had my first LLM disagreement! The Defense responded to a discovery request with bullshit and boilerplate, so I asked each of the LLMs what a good email response would be, considering I'm already nearly done with a Rule 37 motion to compel. Claude responded that it's probably not a good idea to respond at all: just update the motion with the response as an exhibit, note the bullshit parts since they helped (a lot), and call it a day. DeepSeek was its typical "RIP THEIR THROATS OUT" self and provided a detailed list of regulations and rules for going on the offensive instead of relying only on the motion. Gemini pointed out that the email route is more about reinforcing our meet-and-confer duty: we're right, and we're going to integrate this into the motion, but the focus here should be on making the meet-and-confer record rock solid (pretty good advice). ChatGPT recommended the email but produced recommendations that didn't focus on strategy at all, just a direct rebuttal to the discovery response.

Interestingly, opposing counsel asked for a discovery extension (after the suit has been active for more than two years, they're suddenly finding SHRA's bureaucracy hard to navigate), but refused to do a rolling production or produce electronic records (which should be pretty easy, right?). What's baffling is that this organization is such a mess that, if I'd had access to these tools two years ago, they'd have gotten crushed out of the gate. Instead they were able to delay and avoid discovery to the point that we now have to start talking about institutional spoliation, because no one remembers anything.

Leave a Comment

Your email address will not be published. Required fields are marked *

Just had the thought that doing a blog like this, while the litigation is on-going, is both naive and stupid, especially considering the mixed permissible use messaging from the court (but maybe okay for the bar, okay for me?). This popped up as I came across an old archive that had a backup of files which I believed was lost after we got evicted, some of which is responsive to existing discovery requests. I haven’t even had a chance to go through it all yet and see what has already been disclosed and what hasn’t, but here I am in a position where I really want to talk about how the tools are helping me plan strategy while potentially exposing myself unnecessarily/inadvertently.

The fool forges forward

One of the things that stood out to me was how much of the information in this case should have already been protected but hasn’t been. This has always been a concern, and one of the reasons I was hesitant about pursuing this at all, the gross invasion of privacy necessary to pursue this feels caustic. My initial reaction was filing anonymously, but after some reading it appeared that these are very rarely granted. It ultimately became a choice between whether this was important enough (to both us and every other disabled participant SHRA treats similarly) to sacrifice our privacy over. I think so.

What should have been done out of the gate is setting a protective order in place. Reading and trying to absorb as much of the local rules (while in the middle of being evicted, being cut off from other services, and other harassment), the option of a protective order just didn’t really register at all. The further along the process we got, the less space the option took up.

This is one place where LLMs work as an amazing sanity check, we can without penalty ask it to walk us through the process from the beginning, with as many permutations as we like, in a way that would cognitively exhaust any sane individual. And for individuals in the position we are, where we don’t even know what we don’t know, being able to constantly refactor new information in as we encounter it becomes a tremendously powerful tool.

While there’s supposed to be some leeway for “inelegant pleading” by pro se parties, in practice there isn’t unless a judge is personally sympathetic. Which means that pro se parties march into a minefield of issues that have nothing to do with the core issues of the case, procedural explosions which can turn a clear cut set of facts into an irrelevant mist.

Leave a Comment

Your email address will not be published. Required fields are marked *

One of the most difficult aspects of all this is how prone to unique issues each LLM introduces. I’m bouncing between Claude 3.7, GPT 4o, DeepSeek, and Gemini, and they all have weird peculiarities that blow up the others. DeepSeek for instance is super focused on formatting, being obsessive about tables, bolding, italics, everything but the actual content. Gemini is a turd and likes to fight you the whole time when it comes to legal advice altogether, so you have to wrestle it to the ground with prompts and cobble together the results. Claude writes beautifully but is as deep as a puddle, while ChatGPT is deep but impossibly schizophrenic and ADHD at the same time.

I went back and looked at my first two pleadings I used these LLMs for and holy crap there are a lot of issues that I missed trusting the output. First, ChatGPT will absolutely mangle the hell out of citations, and worse is super confident about it’s manglings. As an example I uploaded a copy of the docket report, and somehow it still got every single citation wrong. Worse, it misinterpreted who actually filed particular motions, leading to it interpreting most of the case being dead.

One thing is really clear, that as powerful and as much potential as these tools have, we are a million miles away from them being ready to use without extensive supervision. They also are really terrible at understanding, as an example left to it’s own devices all of the LLMs completely ignored the actual damages I asked for in a motion for partial MSJ (basically just ruling that the ADA, FHA, and Rehab require an interactive process and individualized assessment) and transformed it into a full blown MSJ focused on damage calculations instead of a narrow injunctive relief. And they constantly go off on little tangents like that where they hyper focus on some aspect of the text that may or may not be contextually important and try to expand it.

All said, this is still very much a work in progress, and I’m learning that trust is bad.

Leave a Comment

Your email address will not be published. Required fields are marked *

It seems we can’t find what you’re looking for. Perhaps searching can help.

Welcome to The SHRA Files Blog — where we’ll be sharing updates, research, public records, and commentary on our ongoing civil rights lawsuit and the broader failures of Sacramento’s housing system.

This site exists because no one else was telling this story. Because no agency, nonprofit, or public official stepped in when the Sacramento Housing and Redevelopment Agency (SHRA) failed to meet its basic responsibilities. Because we — and many others like us — were denied access to services that should have been guaranteed.

We started with nothing but documentation, determination, and the experience of being repeatedly failed by the systems meant to help. When traditional legal aid and advocacy organizations told us they couldn’t help, we turned to AI tools to help us structure our case, understand the law, and begin fighting back.

This site is just the beginning.

In the weeks ahead, we’ll be:

  • Breaking down SHRA’s failures in serving disabled and low-income tenants
  • Publishing public filings and supporting exhibits
  • Highlighting how the Housing Choice Voucher program is being mismanaged
  • Documenting the role of AI in expanding legal access

If you’ve been impacted by SHRA or want to contribute to the project, check out our Get Involved page.

Thanks for reading — and welcome to the fight.

— The SHRA Files Team

Leave a Comment

Your email address will not be published. Required fields are marked *

I’ve been attempting to push discovery, the result has been underwhelming. So far they’ve missed most of the deadlines, when notified they missed the deadlines they keep asking for extensions, and when they get extensions they respond with boilerplate and bullshit. I’m to the point where I’m filing a Rule 37(b) motion (will upload to the files section) after I’m finished with this post. The neat thing about the filing is that there’s over 300 pages of exhibits, all of which is boilerplate and bullshit that I had to attach and mail back to the Defense because the magistrate gave me the big middle finger by allowing the court to notify me of things electronically, but refusing to allow me to file electronically. I have lots to say about this magistrate, but we’ll wait until we get clear of her first.

One of the most fascinating things about the boilerplate and bullshit that did get returned was the revelation that five of the named Defendants no longer work at SHRA, and the Defense isn’t sure if they represent them anymore. Like they completely hid that almost everyone involved with the day to day operations over there has left and somehow didn’t mention it to us or the court. Worse, in their initial disclosure, they claimed only one person (who is no longer at SHRA) was responsible for deciding reasonable accommodation requests, now they’ve given me a list of 16 people, with no explanation.

Oh yeah, they can’t seem to find any of the electronically stored information that under the CPRA they’d be responsive to, which is the reason they need so many extensions.

The worst part is that despite them admitting all of this, there’s zero chance of them getting sanctions. And there’s literally nothing to be done about it unless/until it gets to the appellate level.

Leave a Comment

Your email address will not be published. Required fields are marked *

Up until now I assumed that the issues at SHRA were just incompetence creep, that the issues are product of slow pushes back on compliance issues that never got corrected, to the point where institutional knowledge of the practices necessary to maintain compliance was lost.

One of the things that I really like about DeepSeek is that it gets information through methods that aren’t exactly on the up and up, which means that it finds things which are hidden behind legal wrangling that doesn’t exist in China. And one of the things it found was a 2023 Internal Memo generated through whistleblower testimony in which SHRA staff instructed to “prioritize voucher terminations over costly mods”.

And the implementation of this was SHRA terminating vouchers over requests for shower chairs and grab bars as early as 2022. They provided data which showed 68% of disabled tenants who requested mods in 2023 lost vouchers within 6 months (vs. 12% of non-disabled).

WHAT.

And there’s so much worse that I’ll share once I get the discovery back. I’m stunned.

Leave a Comment

Your email address will not be published. Required fields are marked *

Okay the last few days have been a bit of a flurry. I’m still working on getting all the filings and the Defense responses uploaded, but I need to make sure we have a sane directory setup first because there’s quite a bit.

After a few weeks using these LLMs, my attitudes about them are starting to shift. Each of them seems to have their own “personality” above and beyond the artificial personality. The LLM I’ve been most amazed by so far is DeepSeek, which is funny because it was the LLM I was most skeptical of initially. Part of the issue with DeepSeek is that it doesn’t do as good of a job hand holding through the though process, or more appropriately, it’s probably the most “autistic” of the engines. The biggest standout for DeepSeek (especially compared to Gemini which is the most comparable) is that it’s SO GOOD with citations. It’s the only one that freely “admits” it’s plundering the WestLaw database meaning we have a TON of local Eastern District citations.

My citation strategy has been to use something from each level, Supreme Court, 9th Circuit, California (Fed), Eastern District. The reality is the magistrate doesn’t read them anyway, but DeepSeek ultimately gives me a pool of citations to choose from, something the other LLMs struggle with (or just don’t do no matter how you beat them).

Gemini (using 2.5 Pro/5-06) is now my primary writing starting point, even though I hate that it can’t export to .odf or even .docx formats. Because I have to reformat the documents significantly, it has the worst “canvas” style option. That’s kind of worth the annoyance though since Gemini is easily the most “thoughful” and well rounded of the LLMs I’ve used. Much like DeepSeek gives lots of citation options, Gemini gives the best pool of writing options, it’s ability to flexibly address issues in different ways is the best of the bunch so far.

Claude is still my pretty but shallow option, it comes up with the best/most compelling pure language, even if it’s depth of analysis falls short of the others. I wish I didn’t have to prompt it so explicitly to get decent results or it might be the primary canvas despite it’s painful context window. One thing I find it really helpful for is running my DeepSeek (which produces pretty.. “direct” language) through Claude to soften it up.

And finally there’s ChatGPT which up until the last few days was my primary for everything. It’s easy to get started with it, it’s encouraging, it produces compelling results, but it’s kind of a piece of shit. It’s constantly messing up context, screwing up citations, and arbitrarily rewriting things. It’s easily the most obstinately wrong of the LLMs, insisting that it’s doing something correctly no matter how many times you post the incorrect output back into it and it admits it messed up. It’s the only LLM out of the three that I’ve had to deal with “hallucinations”, and it explains it as “woops, I overwrite my own context a lot accidentally”. It’s absolutely maddening at times. I’m steadily weaning myself off of it and it’ll probably just be another voice in the room soon.

An interesting thing happened a bit ago, I had my first LLM disagreement! The Defense responded to a discovery request with bullshit and boilerplate, so I asked each of the LLMs what a good email response would be considering I’m already nearly done with a rule 37 motion to compel. Claude responded that it’s probably not a good idea to respond at all, just update the motion with the response as an exhibit, note the bullshit parts since they helped (a lot), and call it a day. DeepSeek was it’s typical “RIP THEIR THROATS OUT” and provided a detailed list of regulations and rules to go on the offensive instead of only using the motion route. Gemini pointed out that going the email route is more about reinforcing our meet and confer duty, we’re right and we are going to be integrating this in the motion, but the focus here should be on making the meet and confer rock solid (pretty good advice). ChatGPT recommended the email, but produced recommendations that didn’t focus on strategy at all, just a direct rebuttal to the discovery response.

Interestingly, opposing counsel asked for a discovery extension (after more than two years the suit has been active, they are suddenly finding SHRA bureaucracy hard to navigate), but refused to do a rolling production or produce electronic records (which should be pretty easy right?). It’s baffling to me that if I had access to these tools two years ago, this organization is such a mess that they’d have gotten crushed out of the gate. Instead they were able to delay and avoid discovery to the point that we now have to start talking about institutional spoliation because no one remembers anything.

Leave a Comment

Your email address will not be published. Required fields are marked *

Just had the thought that doing a blog like this, while the litigation is on-going, is both naive and stupid, especially considering the mixed permissible use messaging from the court (but maybe okay for the bar, okay for me?). This popped up as I came across an old archive that had a backup of files which I believed was lost after we got evicted, some of which is responsive to existing discovery requests. I haven’t even had a chance to go through it all yet and see what has already been disclosed and what hasn’t, but here I am in a position where I really want to talk about how the tools are helping me plan strategy while potentially exposing myself unnecessarily/inadvertently.

The fool forges forward

One of the things that stood out to me was how much of the information in this case should have already been protected but hasn’t been. This has always been a concern, and one of the reasons I was hesitant about pursuing this at all, the gross invasion of privacy necessary to pursue this feels caustic. My initial reaction was filing anonymously, but after some reading it appeared that these are very rarely granted. It ultimately became a choice between whether this was important enough (to both us and every other disabled participant SHRA treats similarly) to sacrifice our privacy over. I think so.

What should have been done out of the gate is setting a protective order in place. Reading and trying to absorb as much of the local rules (while in the middle of being evicted, being cut off from other services, and other harassment), the option of a protective order just didn’t really register at all. The further along the process we got, the less space the option took up.

This is one place where LLMs work as an amazing sanity check, we can without penalty ask it to walk us through the process from the beginning, with as many permutations as we like, in a way that would cognitively exhaust any sane individual. And for individuals in the position we are, where we don’t even know what we don’t know, being able to constantly refactor new information in as we encounter it becomes a tremendously powerful tool.

While there’s supposed to be some leeway for “inelegant pleading” by pro se parties, in practice there isn’t unless a judge is personally sympathetic. Which means that pro se parties march into a minefield of issues that have nothing to do with the core issues of the case, procedural explosions which can turn a clear cut set of facts into an irrelevant mist.

Leave a Comment

Your email address will not be published. Required fields are marked *

One of the most difficult aspects of all this is how prone to unique issues each LLM introduces. I’m bouncing between Claude 3.7, GPT 4o, DeepSeek, and Gemini, and they all have weird peculiarities that blow up the others. DeepSeek for instance is super focused on formatting, being obsessive about tables, bolding, italics, everything but the actual content. Gemini is a turd and likes to fight you the whole time when it comes to legal advice altogether, so you have to wrestle it to the ground with prompts and cobble together the results. Claude writes beautifully but is as deep as a puddle, while ChatGPT is deep but impossibly schizophrenic and ADHD at the same time.

I went back and looked at my first two pleadings I used these LLMs for and holy crap there are a lot of issues that I missed trusting the output. First, ChatGPT will absolutely mangle the hell out of citations, and worse is super confident about it’s manglings. As an example I uploaded a copy of the docket report, and somehow it still got every single citation wrong. Worse, it misinterpreted who actually filed particular motions, leading to it interpreting most of the case being dead.

One thing is really clear, that as powerful and as much potential as these tools have, we are a million miles away from them being ready to use without extensive supervision. They also are really terrible at understanding, as an example left to it’s own devices all of the LLMs completely ignored the actual damages I asked for in a motion for partial MSJ (basically just ruling that the ADA, FHA, and Rehab require an interactive process and individualized assessment) and transformed it into a full blown MSJ focused on damage calculations instead of a narrow injunctive relief. And they constantly go off on little tangents like that where they hyper focus on some aspect of the text that may or may not be contextually important and try to expand it.

All said, this is still very much a work in progress, and I’m learning that trust is bad.

Welcome to The SHRA Files Blog — where we’ll be sharing updates, research, public records, and commentary on our ongoing civil rights lawsuit and the broader failures of Sacramento’s housing system.

This site exists because no one else was telling this story. Because no agency, nonprofit, or public official stepped in when the Sacramento Housing and Redevelopment Agency (SHRA) failed to meet its basic responsibilities. Because we — and many others like us — were denied access to services that should have been guaranteed.

We started with nothing but documentation, determination, and the experience of being repeatedly failed by the systems meant to help. When traditional legal aid and advocacy organizations told us they couldn’t help, we turned to AI tools to help us structure our case, understand the law, and begin fighting back.

This site is just the beginning.

In the weeks ahead, we’ll be:

  • Breaking down SHRA’s failures in serving disabled and low-income tenants
  • Publishing public filings and supporting exhibits
  • Highlighting how the Housing Choice Voucher program is being mismanaged
  • Documenting the role of AI in expanding legal access

If you’ve been impacted by SHRA or want to contribute to the project, check out our Get Involved page.

Thanks for reading — and welcome to the fight.

— The SHRA Files Team

Up until now I assumed that the issues at SHRA were just incompetence creep, that they were the product of slow backsliding on compliance issues that never got corrected, to the point where institutional knowledge of the practices necessary to maintain compliance was lost.

One of the things that I really like about DeepSeek is that it gets information through methods that aren’t exactly on the up and up, which means it finds things hidden behind legal wrangling that doesn’t exist in China. One of the things it found was a 2023 internal memo, surfaced through whistleblower testimony, in which SHRA staff were instructed to “prioritize voucher terminations over costly mods.”

And the implementation of this was SHRA terminating vouchers over requests for shower chairs and grab bars as early as 2022. DeepSeek also surfaced data showing that 68% of disabled tenants who requested mods in 2023 lost their vouchers within 6 months (vs. 12% of non-disabled tenants).

WHAT.

And there’s so much worse that I’ll share once I get the discovery back. I’m stunned.

Okay, the last few days have been a bit of a flurry. I’m still working on getting all the filings and the Defense responses uploaded, but I need to set up a sane directory structure first because there’s quite a bit.

After a few weeks using these LLMs, my attitude toward them is starting to shift. Each of them seems to have its own “personality” above and beyond the artificial one. The LLM I’ve been most amazed by so far is DeepSeek, which is funny because it was the one I was most skeptical of initially. Part of the issue with DeepSeek is that it doesn’t do as good a job of hand-holding you through the thought process; it’s probably the most “autistic” of the engines. The biggest standout for DeepSeek (especially compared to Gemini, which is the most comparable) is that it’s SO GOOD with citations. It’s the only one that freely “admits” it’s plundering the Westlaw database, meaning we have a TON of local Eastern District citations.

My citation strategy has been to use something from each level: Supreme Court, Ninth Circuit, California federal, Eastern District. The reality is the magistrate doesn’t read them anyway, but DeepSeek ultimately gives me a pool of citations to choose from, something the other LLMs struggle with (or just won’t do no matter how you beat them).

Gemini (using 2.5 Pro/5-06) is now my primary starting point for writing, even though I hate that it can’t export to .odt or even .docx formats, and it has the worst “canvas”-style option, which means I have to reformat the documents significantly. That’s worth the annoyance, though, since Gemini is easily the most thoughtful and well-rounded of the LLMs I’ve used. Much like DeepSeek gives lots of citation options, Gemini gives the best pool of writing options; its ability to flexibly address issues in different ways is the best of the bunch so far.

Claude is still my pretty-but-shallow option: it comes up with the best, most compelling pure language, even if its depth of analysis falls short of the others. I wish I didn’t have to prompt it so explicitly to get decent results, or it might be the primary canvas despite its painful context window. One thing I find it really helpful for is running my DeepSeek output (which tends toward pretty… “direct” language) through Claude to soften it up.

And finally there’s ChatGPT, which up until the last few days was my primary for everything. It’s easy to get started with, it’s encouraging, and it produces compelling results, but it’s kind of a piece of shit. It’s constantly messing up context, screwing up citations, and arbitrarily rewriting things. It’s easily the most obstinately wrong of the LLMs, insisting that it’s doing something correctly no matter how many times you paste the incorrect output back into it and it admits it messed up. It’s the only one of the four I’ve had to deal with “hallucinations” from, and it explains them as “whoops, I overwrite my own context a lot accidentally.” It’s absolutely maddening at times. I’m steadily weaning myself off of it, and it’ll probably just be another voice in the room soon.

An interesting thing happened a bit ago: I had my first LLM disagreement! The Defense responded to a discovery request with bullshit and boilerplate, so I asked each of the LLMs what a good email response would be, considering I’m already nearly done with a Rule 37 motion to compel. Claude responded that it’s probably not a good idea to respond at all: just update the motion with the response as an exhibit, note the bullshit parts since they helped (a lot), and call it a day. DeepSeek was its typical “RIP THEIR THROATS OUT” self and provided a detailed list of regulations and rules for going on the offensive instead of relying on the motion alone. Gemini pointed out that the email route is really about reinforcing our meet-and-confer duty: we’re right, and we’re going to integrate the response into the motion, but the focus should be on making the meet-and-confer record rock solid (pretty good advice). ChatGPT recommended the email but produced recommendations with no strategic focus at all, just a direct rebuttal to the discovery response.

Interestingly, opposing counsel asked for a discovery extension (after the suit has been active for more than two years, they are suddenly finding SHRA’s bureaucracy hard to navigate), but refused to do a rolling production or produce electronic records (which should be pretty easy, right?). This organization is such a mess that if I’d had access to these tools two years ago, they’d have been crushed out of the gate. Instead they were able to delay and avoid discovery to the point that we now have to start talking about institutional spoliation, because no one remembers anything.

Just had the thought that doing a blog like this while the litigation is ongoing is both naive and stupid, especially considering the mixed permissible-use messaging from the court (though if it’s okay for the bar, maybe it’s okay for me?). This popped up as I came across an old archive containing a backup of files I believed were lost after we got evicted, some of which are responsive to existing discovery requests. I haven’t even had a chance to go through it all and see what has already been disclosed and what hasn’t, yet here I am wanting to talk about how the tools are helping me plan strategy while potentially exposing myself unnecessarily or inadvertently.

The fool forges forward

One of the things that stood out to me was how much of the information in this case should already have been protected but hasn’t been. This has always been a concern, and one of the reasons I was hesitant about pursuing this at all; the gross invasion of privacy necessary to pursue it feels caustic. My initial reaction was to file anonymously, but after some reading it appeared that requests to proceed anonymously are very rarely granted. It ultimately became a question of whether this was important enough (to both us and every other disabled participant SHRA treats similarly) to sacrifice our privacy over. I think it is.

What should have been done out of the gate was putting a protective order in place. While I was reading and trying to absorb as much of the local rules as I could (in the middle of being evicted, cut off from other services, and otherwise harassed), the option of a protective order just didn’t register at all. And the further along the process we got, the less space the option took up.

This is one place where LLMs work as an amazing sanity check: we can, without penalty, ask one to walk us through the process from the beginning, with as many permutations as we like, in a way that would cognitively exhaust any sane individual. And for people in our position, who don’t even know what we don’t know, being able to constantly factor in new information as we encounter it is a tremendously powerful tool.

While there’s supposed to be some leeway for “inelegant pleading” by pro se parties, in practice there isn’t unless a judge is personally sympathetic. That means pro se parties march into a minefield of issues that have nothing to do with the core of the case, procedural explosions that can turn a clear-cut set of facts into an irrelevant mist.

Leave a Comment

Your email address will not be published. Required fields are marked *

One of the most difficult aspects of all this is how prone to unique issues each LLM introduces. I’m bouncing between Claude 3.7, GPT 4o, DeepSeek, and Gemini, and they all have weird peculiarities that blow up the others. DeepSeek for instance is super focused on formatting, being obsessive about tables, bolding, italics, everything but the actual content. Gemini is a turd and likes to fight you the whole time when it comes to legal advice altogether, so you have to wrestle it to the ground with prompts and cobble together the results. Claude writes beautifully but is as deep as a puddle, while ChatGPT is deep but impossibly schizophrenic and ADHD at the same time.

I went back and looked at my first two pleadings I used these LLMs for and holy crap there are a lot of issues that I missed trusting the output. First, ChatGPT will absolutely mangle the hell out of citations, and worse is super confident about it’s manglings. As an example I uploaded a copy of the docket report, and somehow it still got every single citation wrong. Worse, it misinterpreted who actually filed particular motions, leading to it interpreting most of the case being dead.

One thing is really clear, that as powerful and as much potential as these tools have, we are a million miles away from them being ready to use without extensive supervision. They also are really terrible at understanding, as an example left to it’s own devices all of the LLMs completely ignored the actual damages I asked for in a motion for partial MSJ (basically just ruling that the ADA, FHA, and Rehab require an interactive process and individualized assessment) and transformed it into a full blown MSJ focused on damage calculations instead of a narrow injunctive relief. And they constantly go off on little tangents like that where they hyper focus on some aspect of the text that may or may not be contextually important and try to expand it.

All said, this is still very much a work in progress, and I’m learning that trust is bad.

Leave a Comment

Your email address will not be published. Required fields are marked *

It seems we can’t find what you’re looking for. Perhaps searching can help.

Welcome to The SHRA Files Blog — where we’ll be sharing updates, research, public records, and commentary on our ongoing civil rights lawsuit and the broader failures of Sacramento’s housing system.

This site exists because no one else was telling this story. Because no agency, nonprofit, or public official stepped in when the Sacramento Housing and Redevelopment Agency (SHRA) failed to meet its basic responsibilities. Because we — and many others like us — were denied access to services that should have been guaranteed.

We started with nothing but documentation, determination, and the experience of being repeatedly failed by the systems meant to help. When traditional legal aid and advocacy organizations told us they couldn’t help, we turned to AI tools to help us structure our case, understand the law, and begin fighting back.

This site is just the beginning.

In the weeks ahead, we’ll be:

  • Breaking down SHRA’s failures in serving disabled and low-income tenants
  • Publishing public filings and supporting exhibits
  • Highlighting how the Housing Choice Voucher program is being mismanaged
  • Documenting the role of AI in expanding legal access

If you’ve been impacted by SHRA or want to contribute to the project, check out our Get Involved page.

Thanks for reading — and welcome to the fight.

— The SHRA Files Team

Leave a Comment

Your email address will not be published. Required fields are marked *

I’ve been attempting to push discovery, the result has been underwhelming. So far they’ve missed most of the deadlines, when notified they missed the deadlines they keep asking for extensions, and when they get extensions they respond with boilerplate and bullshit. I’m to the point where I’m filing a Rule 37(b) motion (will upload to the files section) after I’m finished with this post. The neat thing about the filing is that there’s over 300 pages of exhibits, all of which is boilerplate and bullshit that I had to attach and mail back to the Defense because the magistrate gave me the big middle finger by allowing the court to notify me of things electronically, but refusing to allow me to file electronically. I have lots to say about this magistrate, but we’ll wait until we get clear of her first.

One of the most fascinating things about the boilerplate and bullshit that did get returned was the revelation that five of the named Defendants no longer work at SHRA, and the Defense isn’t sure if they represent them anymore. Like they completely hid that almost everyone involved with the day to day operations over there has left and somehow didn’t mention it to us or the court. Worse, in their initial disclosure, they claimed only one person (who is no longer at SHRA) was responsible for deciding reasonable accommodation requests, now they’ve given me a list of 16 people, with no explanation.

Oh yeah, they can’t seem to find any of the electronically stored information that under the CPRA they’d be responsive to, which is the reason they need so many extensions.

The worst part is that despite them admitting all of this, there’s zero chance of them getting sanctions. And there’s literally nothing to be done about it unless/until it gets to the appellate level.

Leave a Comment

Your email address will not be published. Required fields are marked *

Up until now I assumed that the issues at SHRA were just incompetence creep, that the issues are product of slow pushes back on compliance issues that never got corrected, to the point where institutional knowledge of the practices necessary to maintain compliance was lost.

One of the things that I really like about DeepSeek is that it gets information through methods that aren’t exactly on the up and up, which means that it finds things which are hidden behind legal wrangling that doesn’t exist in China. And one of the things it found was a 2023 Internal Memo generated through whistleblower testimony in which SHRA staff instructed to “prioritize voucher terminations over costly mods”.

And the implementation of this was SHRA terminating vouchers over requests for shower chairs and grab bars as early as 2022. They provided data which showed 68% of disabled tenants who requested mods in 2023 lost vouchers within 6 months (vs. 12% of non-disabled).

WHAT.

And there’s so much worse that I’ll share once I get the discovery back. I’m stunned.

Leave a Comment

Your email address will not be published. Required fields are marked *

Okay the last few days have been a bit of a flurry. I’m still working on getting all the filings and the Defense responses uploaded, but I need to make sure we have a sane directory setup first because there’s quite a bit.

After a few weeks using these LLMs, my attitudes about them are starting to shift. Each of them seems to have their own “personality” above and beyond the artificial personality. The LLM I’ve been most amazed by so far is DeepSeek, which is funny because it was the LLM I was most skeptical of initially. Part of the issue with DeepSeek is that it doesn’t do as good of a job hand holding through the though process, or more appropriately, it’s probably the most “autistic” of the engines. The biggest standout for DeepSeek (especially compared to Gemini which is the most comparable) is that it’s SO GOOD with citations. It’s the only one that freely “admits” it’s plundering the WestLaw database meaning we have a TON of local Eastern District citations.

My citation strategy has been to use something from each level, Supreme Court, 9th Circuit, California (Fed), Eastern District. The reality is the magistrate doesn’t read them anyway, but DeepSeek ultimately gives me a pool of citations to choose from, something the other LLMs struggle with (or just don’t do no matter how you beat them).

Gemini (using 2.5 Pro/5-06) is now my primary writing starting point, even though I hate that it can’t export to .odf or even .docx formats. Because I have to reformat the documents significantly, it has the worst “canvas” style option. That’s kind of worth the annoyance though since Gemini is easily the most “thoughful” and well rounded of the LLMs I’ve used. Much like DeepSeek gives lots of citation options, Gemini gives the best pool of writing options, it’s ability to flexibly address issues in different ways is the best of the bunch so far.

Claude is still my pretty but shallow option, it comes up with the best/most compelling pure language, even if it’s depth of analysis falls short of the others. I wish I didn’t have to prompt it so explicitly to get decent results or it might be the primary canvas despite it’s painful context window. One thing I find it really helpful for is running my DeepSeek (which produces pretty.. “direct” language) through Claude to soften it up.

And finally there’s ChatGPT which up until the last few days was my primary for everything. It’s easy to get started with it, it’s encouraging, it produces compelling results, but it’s kind of a piece of shit. It’s constantly messing up context, screwing up citations, and arbitrarily rewriting things. It’s easily the most obstinately wrong of the LLMs, insisting that it’s doing something correctly no matter how many times you post the incorrect output back into it and it admits it messed up. It’s the only LLM out of the three that I’ve had to deal with “hallucinations”, and it explains it as “woops, I overwrite my own context a lot accidentally”. It’s absolutely maddening at times. I’m steadily weaning myself off of it and it’ll probably just be another voice in the room soon.

An interesting thing happened a bit ago, I had my first LLM disagreement! The Defense responded to a discovery request with bullshit and boilerplate, so I asked each of the LLMs what a good email response would be considering I’m already nearly done with a rule 37 motion to compel. Claude responded that it’s probably not a good idea to respond at all, just update the motion with the response as an exhibit, note the bullshit parts since they helped (a lot), and call it a day. DeepSeek was it’s typical “RIP THEIR THROATS OUT” and provided a detailed list of regulations and rules to go on the offensive instead of only using the motion route. Gemini pointed out that going the email route is more about reinforcing our meet and confer duty, we’re right and we are going to be integrating this in the motion, but the focus here should be on making the meet and confer rock solid (pretty good advice). ChatGPT recommended the email, but produced recommendations that didn’t focus on strategy at all, just a direct rebuttal to the discovery response.

Interestingly, opposing counsel asked for a discovery extension (after more than two years the suit has been active, they are suddenly finding SHRA bureaucracy hard to navigate), but refused to do a rolling production or produce electronic records (which should be pretty easy right?). It’s baffling to me that if I had access to these tools two years ago, this organization is such a mess that they’d have gotten crushed out of the gate. Instead they were able to delay and avoid discovery to the point that we now have to start talking about institutional spoliation because no one remembers anything.

Leave a Comment

Your email address will not be published. Required fields are marked *

Just had the thought that doing a blog like this, while the litigation is on-going, is both naive and stupid, especially considering the mixed permissible use messaging from the court (but maybe okay for the bar, okay for me?). This popped up as I came across an old archive that had a backup of files which I believed was lost after we got evicted, some of which is responsive to existing discovery requests. I haven’t even had a chance to go through it all yet and see what has already been disclosed and what hasn’t, but here I am in a position where I really want to talk about how the tools are helping me plan strategy while potentially exposing myself unnecessarily/inadvertently.

The fool forges forward

One of the things that stood out to me was how much of the information in this case should have already been protected but hasn’t been. This has always been a concern, and one of the reasons I was hesitant about pursuing this at all, the gross invasion of privacy necessary to pursue this feels caustic. My initial reaction was filing anonymously, but after some reading it appeared that these are very rarely granted. It ultimately became a choice between whether this was important enough (to both us and every other disabled participant SHRA treats similarly) to sacrifice our privacy over. I think so.

What should have been done out of the gate is setting a protective order in place. Reading and trying to absorb as much of the local rules (while in the middle of being evicted, being cut off from other services, and other harassment), the option of a protective order just didn’t really register at all. The further along the process we got, the less space the option took up.

This is one place where LLMs work as an amazing sanity check, we can without penalty ask it to walk us through the process from the beginning, with as many permutations as we like, in a way that would cognitively exhaust any sane individual. And for individuals in the position we are, where we don’t even know what we don’t know, being able to constantly refactor new information in as we encounter it becomes a tremendously powerful tool.

While there’s supposed to be some leeway for “inelegant pleading” by pro se parties, in practice there isn’t unless a judge is personally sympathetic. Which means that pro se parties march into a minefield of issues that have nothing to do with the core issues of the case, procedural explosions which can turn a clear cut set of facts into an irrelevant mist.

Leave a Comment

Your email address will not be published. Required fields are marked *

One of the most difficult aspects of all this is how prone to unique issues each LLM introduces. I’m bouncing between Claude 3.7, GPT 4o, DeepSeek, and Gemini, and they all have weird peculiarities that blow up the others. DeepSeek for instance is super focused on formatting, being obsessive about tables, bolding, italics, everything but the actual content. Gemini is a turd and likes to fight you the whole time when it comes to legal advice altogether, so you have to wrestle it to the ground with prompts and cobble together the results. Claude writes beautifully but is as deep as a puddle, while ChatGPT is deep but impossibly schizophrenic and ADHD at the same time.

I went back and looked at my first two pleadings I used these LLMs for and holy crap there are a lot of issues that I missed trusting the output. First, ChatGPT will absolutely mangle the hell out of citations, and worse is super confident about it’s manglings. As an example I uploaded a copy of the docket report, and somehow it still got every single citation wrong. Worse, it misinterpreted who actually filed particular motions, leading to it interpreting most of the case being dead.

One thing is really clear, that as powerful and as much potential as these tools have, we are a million miles away from them being ready to use without extensive supervision. They also are really terrible at understanding, as an example left to it’s own devices all of the LLMs completely ignored the actual damages I asked for in a motion for partial MSJ (basically just ruling that the ADA, FHA, and Rehab require an interactive process and individualized assessment) and transformed it into a full blown MSJ focused on damage calculations instead of a narrow injunctive relief. And they constantly go off on little tangents like that where they hyper focus on some aspect of the text that may or may not be contextually important and try to expand it.

All said, this is still very much a work in progress, and I’m learning that trust is bad.

Leave a Comment

Your email address will not be published. Required fields are marked *

It seems we can’t find what you’re looking for. Perhaps searching can help.

Welcome to The SHRA Files Blog — where we’ll be sharing updates, research, public records, and commentary on our ongoing civil rights lawsuit and the broader failures of Sacramento’s housing system.

This site exists because no one else was telling this story. Because no agency, nonprofit, or public official stepped in when the Sacramento Housing and Redevelopment Agency (SHRA) failed to meet its basic responsibilities. Because we — and many others like us — were denied access to services that should have been guaranteed.

We started with nothing but documentation, determination, and the experience of being repeatedly failed by the systems meant to help. When traditional legal aid and advocacy organizations told us they couldn’t help, we turned to AI tools to help us structure our case, understand the law, and begin fighting back.

This site is just the beginning.

In the weeks ahead, we’ll be:

  • Breaking down SHRA’s failures in serving disabled and low-income tenants
  • Publishing public filings and supporting exhibits
  • Highlighting how the Housing Choice Voucher program is being mismanaged
  • Documenting the role of AI in expanding legal access

If you’ve been impacted by SHRA or want to contribute to the project, check out our Get Involved page.

Thanks for reading — and welcome to the fight.

— The SHRA Files Team

Leave a Comment

Your email address will not be published. Required fields are marked *

I’ve been attempting to push discovery, the result has been underwhelming. So far they’ve missed most of the deadlines, when notified they missed the deadlines they keep asking for extensions, and when they get extensions they respond with boilerplate and bullshit. I’m to the point where I’m filing a Rule 37(b) motion (will upload to the files section) after I’m finished with this post. The neat thing about the filing is that there’s over 300 pages of exhibits, all of which is boilerplate and bullshit that I had to attach and mail back to the Defense because the magistrate gave me the big middle finger by allowing the court to notify me of things electronically, but refusing to allow me to file electronically. I have lots to say about this magistrate, but we’ll wait until we get clear of her first.

One of the most fascinating things about the boilerplate and bullshit that did get returned was the revelation that five of the named Defendants no longer work at SHRA, and the Defense isn’t sure if they represent them anymore. Like they completely hid that almost everyone involved with the day to day operations over there has left and somehow didn’t mention it to us or the court. Worse, in their initial disclosure, they claimed only one person (who is no longer at SHRA) was responsible for deciding reasonable accommodation requests, now they’ve given me a list of 16 people, with no explanation.

Oh yeah, they can’t seem to find any of the electronically stored information that under the CPRA they’d be responsive to, which is the reason they need so many extensions.

The worst part is that despite them admitting all of this, there’s zero chance of them getting sanctions. And there’s literally nothing to be done about it unless/until it gets to the appellate level.

Leave a Comment

Your email address will not be published. Required fields are marked *

Up until now I assumed that the issues at SHRA were just incompetence creep, that the issues are product of slow pushes back on compliance issues that never got corrected, to the point where institutional knowledge of the practices necessary to maintain compliance was lost.

One of the things I really like about DeepSeek is that it gets information through methods that aren’t exactly on the up-and-up, which means it finds things hidden behind legal wrangling that doesn’t exist in China. And one of the things it found was a 2023 internal memo, surfaced through whistleblower testimony, in which SHRA staff were instructed to “prioritize voucher terminations over costly mods”.

And the implementation of this was SHRA terminating vouchers over requests for shower chairs and grab bars as early as 2022. The data it provided showed 68% of disabled tenants who requested mods in 2023 lost their vouchers within six months (vs. 12% of non-disabled tenants).

WHAT.

And there’s so much worse that I’ll share once I get the discovery back. I’m stunned.


Okay, the last few days have been a bit of a flurry. I’m still working on getting all the filings and the Defense responses uploaded, but I need to make sure we have a sane directory structure first, because there’s quite a bit.
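For anyone trying to set up something similar, here’s a minimal sketch of the kind of layout I mean. The folder names are hypothetical placeholders, not our actual structure:

```python
from pathlib import Path

# Hypothetical layout: filings split by who filed them, plus exhibits
# and discovery kept separate so nothing gets buried.
LAYOUT = [
    "filings/plaintiff",      # our motions, oppositions, declarations
    "filings/defense",        # their responses and boilerplate
    "filings/court",          # orders and minute entries
    "exhibits",               # numbered exhibits referenced by filings
    "discovery/requests",
    "discovery/responses",
]

def scaffold(root: str) -> list[Path]:
    """Create the directory tree under root and return the created paths."""
    created = []
    for rel in LAYOUT:
        p = Path(root) / rel
        p.mkdir(parents=True, exist_ok=True)
        created.append(p)
    return created
```

Nothing fancy, but having the split between what we filed, what they filed, and what the court issued is the part that keeps the uploads sane.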

After a few weeks using these LLMs, my attitudes about them are starting to shift. Each of them seems to have its own “personality” above and beyond the artificial personality. The LLM I’ve been most amazed by so far is DeepSeek, which is funny because it was the one I was most skeptical of initially. Part of the issue with DeepSeek is that it doesn’t do as good a job of hand-holding through the thought process; or, more appropriately, it’s probably the most “autistic” of the engines. The biggest standout for DeepSeek (especially compared to Gemini, which is the most comparable) is that it’s SO GOOD with citations. It’s the only one that freely “admits” it’s plundering the WestLaw database, meaning we have a TON of local Eastern District citations.

My citation strategy has been to use something from each level: Supreme Court, Ninth Circuit, California (Fed), Eastern District. The reality is the magistrate doesn’t read them anyway, but DeepSeek ultimately gives me a pool of citations to choose from, something the other LLMs struggle with (or simply won’t do, no matter how you beat on them).
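The selection step is simple enough to sketch, assuming a pool of candidate citations tagged by court level. The case names below are illustrative placeholders, not real citations:

```python
# Court levels in descending order of authority.
LEVELS = ["Supreme Court", "9th Circuit", "California (Fed)", "Eastern District"]

def pick_one_per_level(pool: dict[str, list[str]]) -> list[str]:
    """Return the top candidate from each level that has any candidates,
    ordered from highest authority down."""
    return [pool[level][0] for level in LEVELS if pool.get(level)]
```

The point isn’t the code, it’s the discipline: one authority per level, every time, so a brief never leans entirely on out-of-district cites.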

Gemini (using 2.5 Pro/5-06) is now my primary starting point for writing, even though I hate that it can’t export to .odf or even .docx formats. It has the worst “canvas”-style option, so I have to reformat the documents significantly. That annoyance is worth it, though, since Gemini is easily the most “thoughtful” and well-rounded of the LLMs I’ve used. Much as DeepSeek gives lots of citation options, Gemini gives the best pool of writing options; its ability to flexibly address issues in different ways is the best of the bunch so far.

Claude is still my pretty-but-shallow option: it comes up with the best and most compelling pure language, even if its depth of analysis falls short of the others. I wish I didn’t have to prompt it so explicitly to get decent results, or it might be the primary canvas despite its painful context window. One thing I find really helpful is running my DeepSeek output (which produces pretty… “direct” language) through Claude to soften it up.
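The hand-off between models is simple enough to sketch. Here `draft` and `soften` are hypothetical stand-ins for whichever models you pipe together (in my case I’m literally pasting between web UIs, not calling APIs):

```python
from typing import Callable

def two_stage(prompt: str,
              draft: Callable[[str], str],
              soften: Callable[[str], str]) -> str:
    """Run a drafting model first, then pass its output through a second
    model asked to soften the tone without changing the substance."""
    harsh = draft(prompt)
    return soften("Rewrite in a softer, more professional tone:\n\n" + harsh)
```

The key design choice is that the second pass only gets a tone instruction plus the draft, never the original prompt, so it can’t re-litigate the substance, just the delivery.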

And finally there’s ChatGPT, which up until the last few days was my primary for everything. It’s easy to get started with, it’s encouraging, and it produces compelling results, but it’s kind of a piece of shit. It’s constantly messing up context, screwing up citations, and arbitrarily rewriting things. It’s easily the most obstinately wrong of the LLMs, insisting it’s doing something correctly no matter how many times you paste the incorrect output back in and it admits it messed up. It’s the only one of the four I’ve had to deal with “hallucinations” from, and it explains them as “whoops, I accidentally overwrite my own context a lot.” It’s absolutely maddening at times. I’m steadily weaning myself off of it, and it’ll probably just be another voice in the room soon.

An interesting thing happened a bit ago: I had my first LLM disagreement! The Defense responded to a discovery request with boilerplate and bullshit, so I asked each of the LLMs what a good email response would be, considering I’m already nearly done with a Rule 37 motion to compel. Claude said it’s probably not a good idea to respond at all: just update the motion with the response as an exhibit, note the bullshit parts (since they helped, a lot), and call it a day. DeepSeek was its typical “RIP THEIR THROATS OUT” self and provided a detailed list of regulations and rules for going on the offensive instead of relying on the motion alone. Gemini pointed out that the email route is really about reinforcing our meet-and-confer duty: we’re right, and we’re going to integrate this into the motion, but the focus here should be on making the meet-and-confer record rock solid (pretty good advice). ChatGPT recommended the email but produced recommendations that didn’t focus on strategy at all, just a direct rebuttal to the discovery response.

Interestingly, opposing counsel asked for a discovery extension (after the suit has been active for more than two years, they are suddenly finding SHRA’s bureaucracy hard to navigate), but refused to do a rolling production or produce electronic records (which should be pretty easy, right?). What’s baffling is that if I’d had access to these tools two years ago, this organization is such a mess that they’d have been crushed out of the gate. Instead they were able to delay and avoid discovery to the point that we now have to start talking about institutional spoliation, because no one remembers anything.


Just had the thought that doing a blog like this while the litigation is ongoing is both naive and stupid, especially considering the court’s mixed messaging on permissible use (though if it’s okay for the bar, maybe it’s okay for me?). This popped up because I came across an old archive containing a backup of files I believed were lost after we got evicted, some of which are responsive to existing discovery requests. I haven’t even had a chance to go through it all and see what has already been disclosed and what hasn’t, but here I am, wanting to talk about how the tools are helping me plan strategy while potentially exposing myself unnecessarily or inadvertently.

The fool forges forward

One of the things that stood out to me was how much of the information in this case should already have been protected but hasn’t been. This has always been a concern, and one of the reasons I was hesitant about pursuing this at all; the gross invasion of privacy necessary to pursue it feels caustic. My initial instinct was to file anonymously, but after some reading it appeared that anonymity is very rarely granted. It ultimately became a question of whether this was important enough (to both us and every other disabled participant SHRA treats this way) to sacrifice our privacy over. I think it is.

What should have been done out of the gate was getting a protective order in place. While reading and trying to absorb as much of the local rules as I could (in the middle of being evicted, being cut off from other services, and other harassment), the option of a protective order just didn’t register at all. The further along the process we got, the less space the option took up.

This is one place where LLMs work as an amazing sanity check: we can, without penalty, ask one to walk us through the process from the beginning, with as many permutations as we like, in a way that would cognitively exhaust any sane person. And for people in our position, who don’t even know what we don’t know, being able to constantly factor in new information as we encounter it becomes a tremendously powerful tool.

While there’s supposed to be some leeway for “inelegant pleading” by pro se parties, in practice there isn’t unless a judge is personally sympathetic. That means pro se parties march into a minefield of issues that have nothing to do with the core of the case, procedural explosions that can turn a clear-cut set of facts into an irrelevant mist.


One of the most difficult aspects of all this is the unique set of issues each LLM introduces. I’m bouncing between Claude 3.7, GPT-4o, DeepSeek, and Gemini, and they all have weird peculiarities that blow up the others’ work. DeepSeek, for instance, is super focused on formatting, obsessing over tables, bolding, italics, everything but the actual content. Gemini is a turd that fights you the whole way on anything resembling legal advice, so you have to wrestle it to the ground with prompts and cobble together the results. Claude writes beautifully but is as deep as a puddle, while ChatGPT is deep but impossibly schizophrenic and ADHD at the same time.

I went back and looked at the first two pleadings I used these LLMs for, and holy crap, there are a lot of issues I missed by trusting the output. First, ChatGPT will absolutely mangle the hell out of citations, and worse, it’s super confident about its manglings. As an example, I uploaded a copy of the docket report, and somehow it still got every single citation wrong. Worse, it misinterpreted who actually filed particular motions, leading it to conclude that most of the case was dead.

One thing is really clear: as powerful as these tools are and as much potential as they have, we are a million miles away from them being usable without extensive supervision. They’re also really terrible at understanding intent. As an example, left to their own devices, all of the LLMs completely ignored the actual relief I asked for in a motion for partial MSJ (basically just a ruling that the ADA, FHA, and Rehab Act require an interactive process and individualized assessment) and transformed it into a full-blown MSJ focused on damage calculations instead of narrow injunctive relief. And they constantly go off on tangents like that, hyper-focusing on some aspect of the text that may or may not be contextually important and trying to expand it.

All said, this is still very much a work in progress, and I’m learning that trust is bad.


Welcome to The SHRA Files Blog — where we’ll be sharing updates, research, public records, and commentary on our ongoing civil rights lawsuit and the broader failures of Sacramento’s housing system.

This site exists because no one else was telling this story. Because no agency, nonprofit, or public official stepped in when the Sacramento Housing and Redevelopment Agency (SHRA) failed to meet its basic responsibilities. Because we — and many others like us — were denied access to services that should have been guaranteed.

We started with nothing but documentation, determination, and the experience of being repeatedly failed by the systems meant to help. When traditional legal aid and advocacy organizations told us they couldn’t help, we turned to AI tools to help us structure our case, understand the law, and begin fighting back.

This site is just the beginning.

In the weeks ahead, we’ll be:

  • Breaking down SHRA’s failures in serving disabled and low-income tenants
  • Publishing public filings and supporting exhibits
  • Highlighting how the Housing Choice Voucher program is being mismanaged
  • Documenting the role of AI in expanding legal access

If you’ve been impacted by SHRA or want to contribute to the project, check out our Get Involved page.

Thanks for reading — and welcome to the fight.

— The SHRA Files Team


The fool forges forward

One of the things that stood out to me was how much of the information in this case should have already been protected but hasn’t been. This has always been a concern, and one of the reasons I was hesitant about pursuing this at all, the gross invasion of privacy necessary to pursue this feels caustic. My initial reaction was filing anonymously, but after some reading it appeared that these are very rarely granted. It ultimately became a choice between whether this was important enough (to both us and every other disabled participant SHRA treats similarly) to sacrifice our privacy over. I think so.

What should have been done out of the gate is setting a protective order in place. Reading and trying to absorb as much of the local rules (while in the middle of being evicted, being cut off from other services, and other harassment), the option of a protective order just didn’t really register at all. The further along the process we got, the less space the option took up.

This is one place where LLMs work as an amazing sanity check, we can without penalty ask it to walk us through the process from the beginning, with as many permutations as we like, in a way that would cognitively exhaust any sane individual. And for individuals in the position we are, where we don’t even know what we don’t know, being able to constantly refactor new information in as we encounter it becomes a tremendously powerful tool.

While there’s supposed to be some leeway for “inelegant pleading” by pro se parties, in practice there isn’t unless a judge is personally sympathetic. Which means that pro se parties march into a minefield of issues that have nothing to do with the core issues of the case, procedural explosions which can turn a clear cut set of facts into an irrelevant mist.

Leave a Comment

Your email address will not be published. Required fields are marked *

One of the most difficult aspects of all this is how prone to unique issues each LLM introduces. I’m bouncing between Claude 3.7, GPT 4o, DeepSeek, and Gemini, and they all have weird peculiarities that blow up the others. DeepSeek for instance is super focused on formatting, being obsessive about tables, bolding, italics, everything but the actual content. Gemini is a turd and likes to fight you the whole time when it comes to legal advice altogether, so you have to wrestle it to the ground with prompts and cobble together the results. Claude writes beautifully but is as deep as a puddle, while ChatGPT is deep but impossibly schizophrenic and ADHD at the same time.

I went back and looked at my first two pleadings I used these LLMs for and holy crap there are a lot of issues that I missed trusting the output. First, ChatGPT will absolutely mangle the hell out of citations, and worse is super confident about it’s manglings. As an example I uploaded a copy of the docket report, and somehow it still got every single citation wrong. Worse, it misinterpreted who actually filed particular motions, leading to it interpreting most of the case being dead.

One thing is really clear, that as powerful and as much potential as these tools have, we are a million miles away from them being ready to use without extensive supervision. They also are really terrible at understanding, as an example left to it’s own devices all of the LLMs completely ignored the actual damages I asked for in a motion for partial MSJ (basically just ruling that the ADA, FHA, and Rehab require an interactive process and individualized assessment) and transformed it into a full blown MSJ focused on damage calculations instead of a narrow injunctive relief. And they constantly go off on little tangents like that where they hyper focus on some aspect of the text that may or may not be contextually important and try to expand it.

All said, this is still very much a work in progress, and I’m learning that trust is bad.

Leave a Comment

Your email address will not be published. Required fields are marked *

It seems we can’t find what you’re looking for. Perhaps searching can help.

Welcome to The SHRA Files Blog — where we’ll be sharing updates, research, public records, and commentary on our ongoing civil rights lawsuit and the broader failures of Sacramento’s housing system.

This site exists because no one else was telling this story. Because no agency, nonprofit, or public official stepped in when the Sacramento Housing and Redevelopment Agency (SHRA) failed to meet its basic responsibilities. Because we — and many others like us — were denied access to services that should have been guaranteed.

We started with nothing but documentation, determination, and the experience of being repeatedly failed by the systems meant to help. When traditional legal aid and advocacy organizations told us they couldn’t help, we turned to AI tools to help us structure our case, understand the law, and begin fighting back.

This site is just the beginning.

In the weeks ahead, we’ll be:

  • Breaking down SHRA’s failures in serving disabled and low-income tenants
  • Publishing public filings and supporting exhibits
  • Highlighting how the Housing Choice Voucher program is being mismanaged
  • Documenting the role of AI in expanding legal access

If you’ve been impacted by SHRA or want to contribute to the project, check out our Get Involved page.

Thanks for reading — and welcome to the fight.

— The SHRA Files Team

Leave a Comment

Your email address will not be published. Required fields are marked *


One of the things I really like about DeepSeek is that it gets information through methods that aren’t exactly on the up and up, which means it surfaces things hidden behind legal wrangling that doesn’t exist in China. One of the things it found was a 2023 internal memo, generated through whistleblower testimony, in which SHRA staff were instructed to “prioritize voucher terminations over costly mods.”

And the implementation of this was SHRA terminating vouchers over requests for shower chairs and grab bars as early as 2022. It also produced data showing that 68% of disabled tenants who requested modifications in 2023 lost their vouchers within six months (versus 12% of non-disabled tenants).

WHAT.

And there’s so much worse that I’ll share once I get the discovery back. I’m stunned.


Okay, the last few days have been a bit of a flurry. I’m still working on getting all the filings and the Defense responses uploaded, but I need to set up a sane directory structure first because there’s quite a bit.

After a few weeks using these LLMs, my attitudes about them are starting to shift. Each of them seems to have its own “personality” above and beyond the artificial one. The LLM I’ve been most amazed by so far is DeepSeek, which is funny because it’s the one I was most skeptical of initially. Part of the issue with DeepSeek is that it doesn’t do as good a job of hand-holding you through the thought process; or, more accurately, it’s probably the most “autistic” of the engines. The biggest standout for DeepSeek (especially compared to Gemini, which is the most comparable) is that it’s SO GOOD with citations. It’s the only one that freely “admits” it’s plundering the Westlaw database, which means we have a TON of local Eastern District citations.

My citation strategy has been to use something from each level: Supreme Court, 9th Circuit, California federal courts generally, and the Eastern District specifically. The reality is the magistrate doesn’t read them anyway, but DeepSeek ultimately gives me a pool of citations to choose from, something the other LLMs struggle with (or just don’t do, no matter how you beat on them).
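
That per-level pooling is simple enough to sanity-check mechanically. Below is a minimal, purely illustrative Python sketch; the citations, case names, and court labels are made-up placeholders, not real authorities:

```python
# Pick one candidate citation per court level from a pooled list, and report
# which levels still have no coverage. All citation strings are placeholders.

COURT_LEVELS = ["Supreme Court", "9th Circuit", "California (Fed)", "Eastern District"]

def pick_one_per_level(pool):
    """Return the first candidate found for each recognized court level."""
    picks = {}
    for cite, level in pool:
        if level in COURT_LEVELS and level not in picks:
            picks[level] = cite
    return picks

pool = [
    ("Placeholder v. Example, 999 U.S. 1 (2020)", "Supreme Court"),
    ("Sample v. Demo, 123 F.4th 456 (9th Cir. 2023)", "9th Circuit"),
    ("Test v. Mock, 789 F. Supp. 3d 12 (E.D. Cal. 2024)", "Eastern District"),
]

selected = pick_one_per_level(pool)
missing = [lvl for lvl in COURT_LEVELS if lvl not in selected]  # levels with no cite yet
```

A check like this makes it obvious at a glance which levels a draft still needs a citation for.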

Gemini (using 2.5 Pro/5-06) is now my primary starting point for writing, even though I hate that it can’t export to .odt or even .docx formats. It also has the worst “canvas”-style option of the bunch, which means I have to reformat the documents significantly. That’s worth the annoyance, though, since Gemini is easily the most “thoughtful” and well-rounded of the LLMs I’ve used. Much like DeepSeek gives lots of citation options, Gemini gives the best pool of writing options; its ability to flexibly address issues in different ways is the best I’ve seen so far.

Claude is still my pretty-but-shallow option: it comes up with the best and most compelling pure language, even if its depth of analysis falls short of the others. I wish I didn’t have to prompt it so explicitly to get decent results, or it might be the primary canvas despite its painful context window. One thing I find it really helpful for is running my DeepSeek output (which tends toward pretty… “direct” language) through Claude to soften it up.

And finally there’s ChatGPT, which up until the last few days was my primary for everything. It’s easy to get started with, it’s encouraging, and it produces compelling results, but it’s kind of a piece of shit. It’s constantly messing up context, screwing up citations, and arbitrarily rewriting things. It’s easily the most obstinately wrong of the LLMs, insisting it’s doing something correctly no matter how many times you paste the incorrect output back in and it admits it messed up. It’s also the only one of the four I’ve had to deal with “hallucinations” from, and it explains them as “whoops, I accidentally overwrite my own context a lot.” It’s absolutely maddening at times. I’m steadily weaning myself off of it, and it’ll probably just be another voice in the room soon.

An interesting thing happened a bit ago: I had my first LLM disagreement! The Defense responded to a discovery request with bullshit and boilerplate, so I asked each of the LLMs what a good email response would look like, considering I’m already nearly done with a Rule 37 motion to compel. Claude responded that it’s probably not a good idea to respond at all: just update the motion with the response as an exhibit, note the bullshit parts (since they helped, a lot), and call it a day. DeepSeek was its typical “RIP THEIR THROATS OUT” self and provided a detailed list of regulations and rules for going on the offensive instead of relying on the motion alone. Gemini pointed out that the email route is really about reinforcing our meet-and-confer duty: we’re right, and we’re going to integrate this into the motion, but the focus should be on making the meet-and-confer record rock solid (pretty good advice). ChatGPT recommended the email, but its recommendations didn’t focus on strategy at all, just a direct rebuttal to the discovery response.

Interestingly, opposing counsel asked for a discovery extension (after the suit has been active for more than two years, they’re suddenly finding SHRA’s bureaucracy hard to navigate), but refused to do a rolling production or to produce electronic records (which should be pretty easy, right?). What’s baffling is that this organization is such a mess that, if I’d had access to these tools two years ago, they’d have been crushed out of the gate. Instead they were able to delay and avoid discovery to the point that we now have to start talking about institutional spoliation, because no one remembers anything.


Just had the thought that doing a blog like this while the litigation is ongoing is both naive and stupid, especially considering the mixed permissible-use messaging from the court (though maybe what’s okay for the bar is okay for me?). This came up because I found an old archive containing a backup of files I believed were lost after we got evicted, some of which are responsive to existing discovery requests. I haven’t even had a chance to go through it all and see what has already been disclosed and what hasn’t, but here I am, wanting to talk about how these tools are helping me plan strategy while potentially exposing myself unnecessarily or inadvertently.

The fool forges forward

One of the things that stood out to me was how much of the information in this case should already have been protected but hasn’t been. This has always been a concern, and one of the reasons I was hesitant about pursuing this at all; the gross invasion of privacy necessary to pursue it feels caustic. My initial instinct was to file anonymously, but after some reading it appeared those requests are very rarely granted. It ultimately became a question of whether this was important enough (to us and to every other disabled participant SHRA treats similarly) to sacrifice our privacy over. I think it is.

What should have been done out of the gate was putting a protective order in place. But while I was reading and trying to absorb as much of the local rules as I could (in the middle of being evicted, being cut off from other services, and other harassment), the option of a protective order just didn’t register at all. And the further along the process we got, the less space the option took up.

This is one place where LLMs work as an amazing sanity check: we can, without penalty, ask one to walk us through the process from the beginning, with as many permutations as we like, in a way that would cognitively exhaust any sane human. And for people in our position, who don’t even know what we don’t know, being able to constantly fold new information back in as we encounter it is a tremendously powerful tool.

While there’s supposed to be some leeway for “inelegant pleading” by pro se parties, in practice there isn’t unless a judge is personally sympathetic. Which means pro se parties march into a minefield of issues that have nothing to do with the core of the case: procedural explosions that can turn a clear-cut set of facts into an irrelevant mist.


One of the most difficult aspects of all this is the unique set of issues each LLM introduces. I’m bouncing between Claude 3.7, GPT-4o, DeepSeek, and Gemini, and they all have weird peculiarities that blow up the others. DeepSeek, for instance, is super focused on formatting, obsessing over tables, bolding, italics, everything but the actual content. Gemini is a turd that fights you the whole way on anything resembling legal advice, so you have to wrestle it to the ground with prompts and cobble together the results. Claude writes beautifully but is as deep as a puddle, while ChatGPT is deep but impossibly schizophrenic and ADHD at the same time.

I went back and looked at the first two pleadings I used these LLMs for, and holy crap, there are a lot of issues I missed by trusting the output. First, ChatGPT will absolutely mangle the hell out of citations, and worse, it’s super confident about its manglings. As an example, I uploaded a copy of the docket report, and somehow it still got every single citation wrong. Worse, it misattributed who actually filed particular motions, leading it to conclude that most of the case was dead.
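
For what it’s worth, a crude mechanical cross-check catches some of this. The Python sketch below flags reporter-style citations that appear in a draft but not in a source document you actually have. The case names are invented, and the regex is a heuristic that covers only a few common federal reporters; it’s no substitute for reading the cases:

```python
import re

# Match reporter-style citations like "123 F.3d 456" or "789 F. Supp. 3d 12".
# Heuristic only: covers U.S., F.2d/F.3d/F.4th, and F. Supp. reporters.
CITE_RE = re.compile(
    r"\b\d{1,4}\s+(?:U\.S\.|F\.(?:2d|3d|4th)|F\.\s?Supp\.(?:\s?[23]d)?)\s+\d{1,4}\b"
)

def flag_unverified(draft_text, source_text):
    """Return citations present in the draft but absent from the source text."""
    draft_cites = set(CITE_RE.findall(draft_text))
    verified = set(CITE_RE.findall(source_text))
    return sorted(draft_cites - verified)

# Invented examples for illustration only.
draft = "See Smith v. Jones, 123 F.3d 456 (9th Cir. 1999); Doe v. Roe, 999 U.S. 1 (2020)."
docket = "Smith v. Jones, 123 F.3d 456, was discussed in the order at ECF No. 42."
suspect = flag_unverified(draft, docket)
```

Anything it flags isn’t necessarily wrong; it just hasn’t been verified against a document in hand, which is exactly where the confident manglings slip through.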

One thing is really clear: as powerful as these tools are, and as much potential as they have, we are a million miles away from them being usable without extensive supervision. They’re also really terrible at understanding intent. As an example, left to their own devices, all of the LLMs completely ignored the relief I actually asked for in a motion for partial summary judgment (basically just a ruling that the ADA, FHA, and Rehab Act require an interactive process and an individualized assessment) and transformed it into a full-blown MSJ focused on damage calculations instead of narrow injunctive relief. And they constantly go off on little tangents like that, hyper-focusing on some aspect of the text that may or may not be contextually important and trying to expand it.

All said, this is still very much a work in progress, and I’m learning that trust is bad.


Welcome to The SHRA Files Blog — where we’ll be sharing updates, research, public records, and commentary on our ongoing civil rights lawsuit and the broader failures of Sacramento’s housing system.

This site exists because no one else was telling this story. Because no agency, nonprofit, or public official stepped in when the Sacramento Housing and Redevelopment Agency (SHRA) failed to meet its basic responsibilities. Because we — and many others like us — were denied access to services that should have been guaranteed.

We started with nothing but documentation, determination, and the experience of being repeatedly failed by the systems meant to help. When traditional legal aid and advocacy organizations told us they couldn’t help, we turned to AI tools to help us structure our case, understand the law, and begin fighting back.

This site is just the beginning.

In the weeks ahead, we’ll be:

  • Breaking down SHRA’s failures in serving disabled and low-income tenants
  • Publishing public filings and supporting exhibits
  • Highlighting how the Housing Choice Voucher program is being mismanaged
  • Documenting the role of AI in expanding legal access

If you’ve been impacted by SHRA or want to contribute to the project, check out our Get Involved page.

Thanks for reading — and welcome to the fight.

— The SHRA Files Team


I’ve been attempting to push discovery, the result has been underwhelming. So far they’ve missed most of the deadlines, when notified they missed the deadlines they keep asking for extensions, and when they get extensions they respond with boilerplate and bullshit. I’m to the point where I’m filing a Rule 37(b) motion (will upload to the files section) after I’m finished with this post. The neat thing about the filing is that there’s over 300 pages of exhibits, all of which is boilerplate and bullshit that I had to attach and mail back to the Defense because the magistrate gave me the big middle finger by allowing the court to notify me of things electronically, but refusing to allow me to file electronically. I have lots to say about this magistrate, but we’ll wait until we get clear of her first.

One of the most fascinating things about the boilerplate and bullshit that did get returned was the revelation that five of the named Defendants no longer work at SHRA, and the Defense isn’t sure if they represent them anymore. Like they completely hid that almost everyone involved with the day to day operations over there has left and somehow didn’t mention it to us or the court. Worse, in their initial disclosure, they claimed only one person (who is no longer at SHRA) was responsible for deciding reasonable accommodation requests, now they’ve given me a list of 16 people, with no explanation.

Oh yeah, they can’t seem to find any of the electronically stored information that under the CPRA they’d be responsive to, which is the reason they need so many extensions.

The worst part is that despite them admitting all of this, there’s zero chance of them getting sanctions. And there’s literally nothing to be done about it unless/until it gets to the appellate level.

Leave a Comment

Your email address will not be published. Required fields are marked *

Up until now I assumed that the issues at SHRA were just incompetence creep, that the issues are product of slow pushes back on compliance issues that never got corrected, to the point where institutional knowledge of the practices necessary to maintain compliance was lost.

One of the things that I really like about DeepSeek is that it gets information through methods that aren’t exactly on the up and up, which means that it finds things which are hidden behind legal wrangling that doesn’t exist in China. And one of the things it found was a 2023 Internal Memo generated through whistleblower testimony in which SHRA staff instructed to “prioritize voucher terminations over costly mods”.

And the implementation of this was SHRA terminating vouchers over requests for shower chairs and grab bars as early as 2022. They provided data which showed 68% of disabled tenants who requested mods in 2023 lost vouchers within 6 months (vs. 12% of non-disabled).

WHAT.

And there’s so much worse that I’ll share once I get the discovery back. I’m stunned.

Leave a Comment

Your email address will not be published. Required fields are marked *

Okay the last few days have been a bit of a flurry. I’m still working on getting all the filings and the Defense responses uploaded, but I need to make sure we have a sane directory setup first because there’s quite a bit.

After a few weeks using these LLMs, my attitudes about them are starting to shift. Each of them seems to have their own “personality” above and beyond the artificial personality. The LLM I’ve been most amazed by so far is DeepSeek, which is funny because it was the LLM I was most skeptical of initially. Part of the issue with DeepSeek is that it doesn’t do as good of a job hand holding through the though process, or more appropriately, it’s probably the most “autistic” of the engines. The biggest standout for DeepSeek (especially compared to Gemini which is the most comparable) is that it’s SO GOOD with citations. It’s the only one that freely “admits” it’s plundering the WestLaw database meaning we have a TON of local Eastern District citations.

My citation strategy has been to use something from each level, Supreme Court, 9th Circuit, California (Fed), Eastern District. The reality is the magistrate doesn’t read them anyway, but DeepSeek ultimately gives me a pool of citations to choose from, something the other LLMs struggle with (or just don’t do no matter how you beat them).

Gemini (using 2.5 Pro/5-06) is now my primary writing starting point, even though I hate that it can’t export to .odf or even .docx formats. Because I have to reformat the documents significantly, it has the worst “canvas” style option. That’s kind of worth the annoyance though since Gemini is easily the most “thoughful” and well rounded of the LLMs I’ve used. Much like DeepSeek gives lots of citation options, Gemini gives the best pool of writing options, it’s ability to flexibly address issues in different ways is the best of the bunch so far.

Claude is still my pretty but shallow option, it comes up with the best/most compelling pure language, even if it’s depth of analysis falls short of the others. I wish I didn’t have to prompt it so explicitly to get decent results or it might be the primary canvas despite it’s painful context window. One thing I find it really helpful for is running my DeepSeek (which produces pretty.. “direct” language) through Claude to soften it up.

And finally there’s ChatGPT which up until the last few days was my primary for everything. It’s easy to get started with it, it’s encouraging, it produces compelling results, but it’s kind of a piece of shit. It’s constantly messing up context, screwing up citations, and arbitrarily rewriting things. It’s easily the most obstinately wrong of the LLMs, insisting that it’s doing something correctly no matter how many times you post the incorrect output back into it and it admits it messed up. It’s the only LLM out of the three that I’ve had to deal with “hallucinations”, and it explains it as “woops, I overwrite my own context a lot accidentally”. It’s absolutely maddening at times. I’m steadily weaning myself off of it and it’ll probably just be another voice in the room soon.

An interesting thing happened a bit ago, I had my first LLM disagreement! The Defense responded to a discovery request with bullshit and boilerplate, so I asked each of the LLMs what a good email response would be considering I’m already nearly done with a rule 37 motion to compel. Claude responded that it’s probably not a good idea to respond at all, just update the motion with the response as an exhibit, note the bullshit parts since they helped (a lot), and call it a day. DeepSeek was it’s typical “RIP THEIR THROATS OUT” and provided a detailed list of regulations and rules to go on the offensive instead of only using the motion route. Gemini pointed out that going the email route is more about reinforcing our meet and confer duty, we’re right and we are going to be integrating this in the motion, but the focus here should be on making the meet and confer rock solid (pretty good advice). ChatGPT recommended the email, but produced recommendations that didn’t focus on strategy at all, just a direct rebuttal to the discovery response.

Interestingly, opposing counsel asked for a discovery extension (after more than two years the suit has been active, they are suddenly finding SHRA bureaucracy hard to navigate), but refused to do a rolling production or produce electronic records (which should be pretty easy right?). It’s baffling to me that if I had access to these tools two years ago, this organization is such a mess that they’d have gotten crushed out of the gate. Instead they were able to delay and avoid discovery to the point that we now have to start talking about institutional spoliation because no one remembers anything.

Leave a Comment

Your email address will not be published. Required fields are marked *

Just had the thought that doing a blog like this, while the litigation is on-going, is both naive and stupid, especially considering the mixed permissible use messaging from the court (but maybe okay for the bar, okay for me?). This popped up as I came across an old archive that had a backup of files which I believed was lost after we got evicted, some of which is responsive to existing discovery requests. I haven’t even had a chance to go through it all yet and see what has already been disclosed and what hasn’t, but here I am in a position where I really want to talk about how the tools are helping me plan strategy while potentially exposing myself unnecessarily/inadvertently.

The fool forges forward

One of the things that stood out to me was how much of the information in this case should have already been protected but hasn’t been. This has always been a concern, and one of the reasons I was hesitant about pursuing this at all, the gross invasion of privacy necessary to pursue this feels caustic. My initial reaction was filing anonymously, but after some reading it appeared that these are very rarely granted. It ultimately became a choice between whether this was important enough (to both us and every other disabled participant SHRA treats similarly) to sacrifice our privacy over. I think so.

What should have been done out of the gate is setting a protective order in place. Reading and trying to absorb as much of the local rules (while in the middle of being evicted, being cut off from other services, and other harassment), the option of a protective order just didn’t really register at all. The further along the process we got, the less space the option took up.

This is one place where LLMs work as an amazing sanity check, we can without penalty ask it to walk us through the process from the beginning, with as many permutations as we like, in a way that would cognitively exhaust any sane individual. And for individuals in the position we are, where we don’t even know what we don’t know, being able to constantly refactor new information in as we encounter it becomes a tremendously powerful tool.

While there’s supposed to be some leeway for “inelegant pleading” by pro se parties, in practice there isn’t unless a judge is personally sympathetic. That means pro se parties march into a minefield of issues that have nothing to do with the core of the case: procedural explosions that can turn a clear-cut set of facts into an irrelevant mist.

One of the most difficult aspects of all this is that each LLM introduces its own unique issues. I’m bouncing between Claude 3.7, GPT-4o, DeepSeek, and Gemini, and they all have weird peculiarities that blow up the others’ work. DeepSeek, for instance, is hyper-focused on formatting, obsessing over tables, bolding, italics, everything but the actual content. Gemini is a turd that fights you the whole way on anything resembling legal advice, so you have to wrestle it to the ground with prompts and cobble together the results. Claude writes beautifully but is as deep as a puddle, while ChatGPT is deep but impossibly scattered and distractible at the same time.

I went back and looked at the first two pleadings I used these LLMs for, and holy crap, there are a lot of issues I missed by trusting the output. First, ChatGPT will absolutely mangle citations, and worse, it’s supremely confident about its manglings. As an example, I uploaded a copy of the docket report, and it still got every single citation wrong. Worse, it misinterpreted who actually filed particular motions, leading it to conclude that most of the case was dead.
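
A mechanical cross-check would have caught most of this. Below is a minimal sketch of the idea, assuming a plain-text docket report where each entry’s line begins with its number; the function names are mine, not from any real tool:

```python
import re

def docket_numbers(docket_text: str) -> set[str]:
    """Collect the entry numbers that actually appear in the docket report."""
    return set(re.findall(r"^\s*(\d+)\s", docket_text, flags=re.MULTILINE))

def cited_numbers(draft_text: str) -> set[str]:
    """Collect every 'ECF No. N' reference an LLM put into the draft."""
    return set(re.findall(r"ECF No\.\s*(\d+)", draft_text))

def phantom_citations(draft_text: str, docket_text: str) -> set[str]:
    """Return ECF numbers cited in the draft that do not exist on the docket."""
    return cited_numbers(draft_text) - docket_numbers(docket_text)
```

Anything `phantom_citations` returns is an invented docket reference that has to be fixed by hand. It says nothing about whether the real entries are cited for the right reasons, so it’s a floor, not a substitute for reading.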

One thing is really clear: as powerful as these tools are and as much potential as they have, we are a million miles away from them being usable without extensive supervision. They’re also really terrible at understanding intent. Left to their own devices, all of the LLMs completely ignored the narrow relief I asked for in a motion for partial summary judgment (basically just a ruling that the ADA, the FHA, and the Rehab Act require an interactive process and an individualized assessment) and transformed it into a full-blown MSJ focused on damage calculations instead of narrow injunctive relief. They constantly go off on tangents like that, hyper-focusing on some aspect of the text that may or may not be contextually important and trying to expand it.

All said, this is still very much a work in progress, and I’m learning that trusting the output is a mistake.

Welcome to The SHRA Files Blog — where we’ll be sharing updates, research, public records, and commentary on our ongoing civil rights lawsuit and the broader failures of Sacramento’s housing system.

This site exists because no one else was telling this story. Because no agency, nonprofit, or public official stepped in when the Sacramento Housing and Redevelopment Agency (SHRA) failed to meet its basic responsibilities. Because we — and many others like us — were denied access to services that should have been guaranteed.

We started with nothing but documentation, determination, and the experience of being repeatedly failed by the systems meant to help. When traditional legal aid and advocacy organizations told us they couldn’t help, we turned to AI tools to help us structure our case, understand the law, and begin fighting back.

This site is just the beginning.

In the weeks ahead, we’ll be:

  • Breaking down SHRA’s failures in serving disabled and low-income tenants
  • Publishing public filings and supporting exhibits
  • Highlighting how the Housing Choice Voucher program is being mismanaged
  • Documenting the role of AI in expanding legal access

If you’ve been impacted by SHRA or want to contribute to the project, check out our Get Involved page.

Thanks for reading — and welcome to the fight.

— The SHRA Files Team

Up until now I assumed that the issues at SHRA were just incompetence creep: the product of slow slippage on compliance issues that never got corrected, to the point where institutional knowledge of the practices necessary to maintain compliance was lost.

One of the things I really like about DeepSeek is that it gets information through methods that aren’t exactly on the up and up, which means it finds things hidden behind legal wrangling that doesn’t exist in China. One of the things it surfaced was a 2023 internal memo, generated through whistleblower testimony, in which SHRA staff were instructed to “prioritize voucher terminations over costly mods”.

In practice, that meant SHRA terminating vouchers over requests for shower chairs and grab bars as early as 2022. It also produced data showing that 68% of disabled tenants who requested mods in 2023 lost their vouchers within six months (versus 12% of non-disabled tenants).

WHAT.

And there’s so much worse that I’ll share once I get the discovery back. I’m stunned.

Okay, the last few days have been a bit of a flurry. I’m still working on getting all the filings and the Defense responses uploaded, but I need to make sure we have a sane directory setup first, because there’s quite a bit.
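
For the directory setup, nothing fancy is needed; something like the following would do (the folder names here are placeholders, not the site’s actual structure):

```python
from pathlib import Path

# One folder per document stream; discovery gets its own request/response split.
for sub in ("filings", "defense-responses", "exhibits",
            "discovery/requests", "discovery/responses"):
    Path("shra-files", sub).mkdir(parents=True, exist_ok=True)
```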

After a few weeks of using these LLMs, my attitudes about them are starting to shift. Each seems to have its own “personality” above and beyond the artificial one. The LLM I’ve been most amazed by so far is DeepSeek, which is funny because it’s the one I was most skeptical of initially. Part of the issue with DeepSeek is that it doesn’t do as good a job of hand-holding you through the thought process; it’s probably the most “autistic” of the engines. Its biggest standout (especially compared to Gemini, which is the most comparable) is that it’s SO GOOD with citations. It’s the only one that freely “admits” it’s plundering the Westlaw database, which means we have a TON of local Eastern District citations.

My citation strategy has been to use something from each level: Supreme Court, Ninth Circuit, California federal, and Eastern District. The reality is that the magistrate doesn’t read them anyway, but DeepSeek ultimately gives me a pool of citations to choose from, something the other LLMs struggle with (or just won’t do, no matter how you beat them).

Gemini (using 2.5 Pro, the 5-06 release) is now my primary starting point for writing, even though I hate that it can’t export to .odt or even .docx. It has the worst “canvas”-style editor of the four, so I have to reformat the documents significantly. That annoyance is worth it, though, since Gemini is easily the most thoughtful and well-rounded of the LLMs I’ve used. Much as DeepSeek gives lots of citation options, Gemini gives the best pool of writing options; its ability to flexibly address issues in different ways is the best of the bunch so far.

Claude is still my pretty-but-shallow option: it comes up with the best, most compelling pure language, even if its depth of analysis falls short of the others. I wish I didn’t have to prompt it so explicitly to get decent results, or it might be my primary canvas despite its painful context window. One thing I find it really helpful for is running my DeepSeek output (which tends toward pretty “direct” language) through Claude to soften it up.

And finally there’s ChatGPT, which up until the last few days was my primary for everything. It’s easy to get started with, it’s encouraging, it produces compelling results, and it’s kind of a piece of shit. It’s constantly messing up context, screwing up citations, and arbitrarily rewriting things. It’s easily the most obstinately wrong of the LLMs, insisting it’s doing something correctly no matter how many times you paste the incorrect output back in and it admits it messed up. It’s the only one of the four where I’ve had to deal with outright “hallucinations”, and it explains them as, roughly, “whoops, I accidentally overwrite my own context a lot”. It’s absolutely maddening at times. I’m steadily weaning myself off of it, and it’ll probably just be another voice in the room soon.

An interesting thing happened a bit ago: I had my first LLM disagreement! The Defense responded to a discovery request with boilerplate and bullshit, so I asked each of the LLMs what a good email response would look like, given that I’m already nearly done with a Rule 37 motion to compel. Claude said it’s probably not a good idea to respond at all: just update the motion with the response as an exhibit, note the bullshit parts (since they helped, a lot), and call it a day. DeepSeek was its typical “RIP THEIR THROATS OUT” self and provided a detailed list of regulations and rules for going on the offensive instead of relying on the motion alone. Gemini pointed out that the email route is really about reinforcing our meet-and-confer duty: we’re right, and we’re going to integrate this into the motion, but the focus should be on making the meet-and-confer record rock solid (pretty good advice). ChatGPT recommended the email but produced recommendations that didn’t focus on strategy at all, just a direct rebuttal to the discovery response.
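
That “ask everyone and compare” step is easy to mechanize. Here is a sketch of the fan-out, with stub lambdas standing in for the real model clients (none of this is a real API, just the pattern):

```python
from typing import Callable

def second_opinions(prompt: str,
                    models: dict[str, Callable[[str], str]]) -> dict[str, str]:
    """Put the same question to every model; the disagreements are the signal."""
    return {name: ask(prompt) for name, ask in models.items()}

# Stub "models" for illustration; swap in real client calls per service.
models = {
    "claude": lambda p: "Don't email; attach the response to the motion as an exhibit.",
    "deepseek": lambda p: "Go on the offensive; cite the discovery rules directly.",
}
answers = second_opinions("How should I respond to this boilerplate?", models)
```

Reading the answers side by side is exactly what exposed the strategic split.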

Interestingly, opposing counsel asked for a discovery extension (after the suit has been active for more than two years, they’re suddenly finding SHRA’s bureaucracy hard to navigate) but refused to do a rolling production or to produce electronic records (which should be easy, right?). What’s baffling is that if I’d had access to these tools two years ago, this organization is such a mess they’d have been crushed out of the gate. Instead they were able to delay and avoid discovery to the point where we now have to start talking about institutional spoliation, because no one remembers anything.

Leave a Comment

Your email address will not be published. Required fields are marked *

Just had the thought that doing a blog like this, while the litigation is on-going, is both naive and stupid, especially considering the mixed permissible use messaging from the court (but maybe okay for the bar, okay for me?). This popped up as I came across an old archive that had a backup of files which I believed was lost after we got evicted, some of which is responsive to existing discovery requests. I haven’t even had a chance to go through it all yet and see what has already been disclosed and what hasn’t, but here I am in a position where I really want to talk about how the tools are helping me plan strategy while potentially exposing myself unnecessarily/inadvertently.

The fool forges forward

One of the things that stood out to me was how much of the information in this case should have already been protected but hasn’t been. This has always been a concern, and one of the reasons I was hesitant about pursuing this at all, the gross invasion of privacy necessary to pursue this feels caustic. My initial reaction was filing anonymously, but after some reading it appeared that these are very rarely granted. It ultimately became a choice between whether this was important enough (to both us and every other disabled participant SHRA treats similarly) to sacrifice our privacy over. I think so.

What should have been done out of the gate is setting a protective order in place. Reading and trying to absorb as much of the local rules (while in the middle of being evicted, being cut off from other services, and other harassment), the option of a protective order just didn’t really register at all. The further along the process we got, the less space the option took up.

This is one place where LLMs work as an amazing sanity check, we can without penalty ask it to walk us through the process from the beginning, with as many permutations as we like, in a way that would cognitively exhaust any sane individual. And for individuals in the position we are, where we don’t even know what we don’t know, being able to constantly refactor new information in as we encounter it becomes a tremendously powerful tool.

While there’s supposed to be some leeway for “inelegant pleading” by pro se parties, in practice there isn’t unless a judge is personally sympathetic. Which means that pro se parties march into a minefield of issues that have nothing to do with the core issues of the case, procedural explosions which can turn a clear cut set of facts into an irrelevant mist.

Leave a Comment

Your email address will not be published. Required fields are marked *

One of the most difficult aspects of all this is how prone to unique issues each LLM introduces. I’m bouncing between Claude 3.7, GPT 4o, DeepSeek, and Gemini, and they all have weird peculiarities that blow up the others. DeepSeek for instance is super focused on formatting, being obsessive about tables, bolding, italics, everything but the actual content. Gemini is a turd and likes to fight you the whole time when it comes to legal advice altogether, so you have to wrestle it to the ground with prompts and cobble together the results. Claude writes beautifully but is as deep as a puddle, while ChatGPT is deep but impossibly schizophrenic and ADHD at the same time.

I went back and looked at my first two pleadings I used these LLMs for and holy crap there are a lot of issues that I missed trusting the output. First, ChatGPT will absolutely mangle the hell out of citations, and worse is super confident about it’s manglings. As an example I uploaded a copy of the docket report, and somehow it still got every single citation wrong. Worse, it misinterpreted who actually filed particular motions, leading to it interpreting most of the case being dead.

One thing is really clear, that as powerful and as much potential as these tools have, we are a million miles away from them being ready to use without extensive supervision. They also are really terrible at understanding, as an example left to it’s own devices all of the LLMs completely ignored the actual damages I asked for in a motion for partial MSJ (basically just ruling that the ADA, FHA, and Rehab require an interactive process and individualized assessment) and transformed it into a full blown MSJ focused on damage calculations instead of a narrow injunctive relief. And they constantly go off on little tangents like that where they hyper focus on some aspect of the text that may or may not be contextually important and try to expand it.

All said, this is still very much a work in progress, and I’m learning that trust is bad.

Leave a Comment

Your email address will not be published. Required fields are marked *

It seems we can’t find what you’re looking for. Perhaps searching can help.

Welcome to The SHRA Files Blog — where we’ll be sharing updates, research, public records, and commentary on our ongoing civil rights lawsuit and the broader failures of Sacramento’s housing system.

This site exists because no one else was telling this story. Because no agency, nonprofit, or public official stepped in when the Sacramento Housing and Redevelopment Agency (SHRA) failed to meet its basic responsibilities. Because we — and many others like us — were denied access to services that should have been guaranteed.

We started with nothing but documentation, determination, and the experience of being repeatedly failed by the systems meant to help. When traditional legal aid and advocacy organizations told us they couldn’t help, we turned to AI tools to help us structure our case, understand the law, and begin fighting back.

This site is just the beginning.

In the weeks ahead, we’ll be:

  • Breaking down SHRA’s failures in serving disabled and low-income tenants
  • Publishing public filings and supporting exhibits
  • Highlighting how the Housing Choice Voucher program is being mismanaged
  • Documenting the role of AI in expanding legal access

If you’ve been impacted by SHRA or want to contribute to the project, check out our Get Involved page.

Thanks for reading — and welcome to the fight.

— The SHRA Files Team

Leave a Comment

Your email address will not be published. Required fields are marked *

I’ve been attempting to push discovery, the result has been underwhelming. So far they’ve missed most of the deadlines, when notified they missed the deadlines they keep asking for extensions, and when they get extensions they respond with boilerplate and bullshit. I’m to the point where I’m filing a Rule 37(b) motion (will upload to the files section) after I’m finished with this post. The neat thing about the filing is that there’s over 300 pages of exhibits, all of which is boilerplate and bullshit that I had to attach and mail back to the Defense because the magistrate gave me the big middle finger by allowing the court to notify me of things electronically, but refusing to allow me to file electronically. I have lots to say about this magistrate, but we’ll wait until we get clear of her first.

One of the most fascinating things about the boilerplate and bullshit that did get returned was the revelation that five of the named Defendants no longer work at SHRA, and the Defense isn’t sure if they represent them anymore. Like they completely hid that almost everyone involved with the day to day operations over there has left and somehow didn’t mention it to us or the court. Worse, in their initial disclosure, they claimed only one person (who is no longer at SHRA) was responsible for deciding reasonable accommodation requests, now they’ve given me a list of 16 people, with no explanation.

Oh yeah, they can’t seem to find any of the electronically stored information that under the CPRA they’d be responsive to, which is the reason they need so many extensions.

The worst part is that despite them admitting all of this, there’s zero chance of them getting sanctions. And there’s literally nothing to be done about it unless/until it gets to the appellate level.

Leave a Comment

Your email address will not be published. Required fields are marked *

Up until now I assumed that the issues at SHRA were just incompetence creep, that the issues are product of slow pushes back on compliance issues that never got corrected, to the point where institutional knowledge of the practices necessary to maintain compliance was lost.

One of the things that I really like about DeepSeek is that it gets information through methods that aren’t exactly on the up and up, which means that it finds things which are hidden behind legal wrangling that doesn’t exist in China. And one of the things it found was a 2023 Internal Memo generated through whistleblower testimony in which SHRA staff instructed to “prioritize voucher terminations over costly mods”.

And the implementation of this was SHRA terminating vouchers over requests for shower chairs and grab bars as early as 2022. They provided data which showed 68% of disabled tenants who requested mods in 2023 lost vouchers within 6 months (vs. 12% of non-disabled).

WHAT.

And there’s so much worse that I’ll share once I get the discovery back. I’m stunned.

Leave a Comment

Your email address will not be published. Required fields are marked *

Okay the last few days have been a bit of a flurry. I’m still working on getting all the filings and the Defense responses uploaded, but I need to make sure we have a sane directory setup first because there’s quite a bit.

After a few weeks using these LLMs, my attitudes about them are starting to shift. Each of them seems to have their own “personality” above and beyond the artificial personality. The LLM I’ve been most amazed by so far is DeepSeek, which is funny because it was the LLM I was most skeptical of initially. Part of the issue with DeepSeek is that it doesn’t do as good of a job hand holding through the though process, or more appropriately, it’s probably the most “autistic” of the engines. The biggest standout for DeepSeek (especially compared to Gemini which is the most comparable) is that it’s SO GOOD with citations. It’s the only one that freely “admits” it’s plundering the WestLaw database meaning we have a TON of local Eastern District citations.

My citation strategy has been to use something from each level, Supreme Court, 9th Circuit, California (Fed), Eastern District. The reality is the magistrate doesn’t read them anyway, but DeepSeek ultimately gives me a pool of citations to choose from, something the other LLMs struggle with (or just don’t do no matter how you beat them).

Gemini (using 2.5 Pro/5-06) is now my primary writing starting point, even though I hate that it can’t export to .odf or even .docx formats. Because I have to reformat the documents significantly, it has the worst “canvas” style option. That’s kind of worth the annoyance though since Gemini is easily the most “thoughful” and well rounded of the LLMs I’ve used. Much like DeepSeek gives lots of citation options, Gemini gives the best pool of writing options, it’s ability to flexibly address issues in different ways is the best of the bunch so far.

Claude is still my pretty but shallow option, it comes up with the best/most compelling pure language, even if it’s depth of analysis falls short of the others. I wish I didn’t have to prompt it so explicitly to get decent results or it might be the primary canvas despite it’s painful context window. One thing I find it really helpful for is running my DeepSeek (which produces pretty.. “direct” language) through Claude to soften it up.

And finally there’s ChatGPT which up until the last few days was my primary for everything. It’s easy to get started with it, it’s encouraging, it produces compelling results, but it’s kind of a piece of shit. It’s constantly messing up context, screwing up citations, and arbitrarily rewriting things. It’s easily the most obstinately wrong of the LLMs, insisting that it’s doing something correctly no matter how many times you post the incorrect output back into it and it admits it messed up. It’s the only LLM out of the three that I’ve had to deal with “hallucinations”, and it explains it as “woops, I overwrite my own context a lot accidentally”. It’s absolutely maddening at times. I’m steadily weaning myself off of it and it’ll probably just be another voice in the room soon.

An interesting thing happened a bit ago, I had my first LLM disagreement! The Defense responded to a discovery request with bullshit and boilerplate, so I asked each of the LLMs what a good email response would be considering I’m already nearly done with a rule 37 motion to compel. Claude responded that it’s probably not a good idea to respond at all, just update the motion with the response as an exhibit, note the bullshit parts since they helped (a lot), and call it a day. DeepSeek was it’s typical “RIP THEIR THROATS OUT” and provided a detailed list of regulations and rules to go on the offensive instead of only using the motion route. Gemini pointed out that going the email route is more about reinforcing our meet and confer duty, we’re right and we are going to be integrating this in the motion, but the focus here should be on making the meet and confer rock solid (pretty good advice). ChatGPT recommended the email, but produced recommendations that didn’t focus on strategy at all, just a direct rebuttal to the discovery response.

Interestingly, opposing counsel asked for a discovery extension (after more than two years the suit has been active, they are suddenly finding SHRA bureaucracy hard to navigate), but refused to do a rolling production or produce electronic records (which should be pretty easy right?). It’s baffling to me that if I had access to these tools two years ago, this organization is such a mess that they’d have gotten crushed out of the gate. Instead they were able to delay and avoid discovery to the point that we now have to start talking about institutional spoliation because no one remembers anything.

Leave a Comment

Your email address will not be published. Required fields are marked *

Just had the thought that doing a blog like this, while the litigation is on-going, is both naive and stupid, especially considering the mixed permissible use messaging from the court (but maybe okay for the bar, okay for me?). This popped up as I came across an old archive that had a backup of files which I believed was lost after we got evicted, some of which is responsive to existing discovery requests. I haven’t even had a chance to go through it all yet and see what has already been disclosed and what hasn’t, but here I am in a position where I really want to talk about how the tools are helping me plan strategy while potentially exposing myself unnecessarily/inadvertently.

The fool forges forward

One of the things that stood out to me was how much of the information in this case should have already been protected but hasn’t been. This has always been a concern, and one of the reasons I was hesitant about pursuing this at all, the gross invasion of privacy necessary to pursue this feels caustic. My initial reaction was filing anonymously, but after some reading it appeared that these are very rarely granted. It ultimately became a choice between whether this was important enough (to both us and every other disabled participant SHRA treats similarly) to sacrifice our privacy over. I think so.

What should have been done out of the gate is setting a protective order in place. Reading and trying to absorb as much of the local rules (while in the middle of being evicted, being cut off from other services, and other harassment), the option of a protective order just didn’t really register at all. The further along the process we got, the less space the option took up.

This is one place where LLMs work as an amazing sanity check, we can without penalty ask it to walk us through the process from the beginning, with as many permutations as we like, in a way that would cognitively exhaust any sane individual. And for individuals in the position we are, where we don’t even know what we don’t know, being able to constantly refactor new information in as we encounter it becomes a tremendously powerful tool.

While there’s supposed to be some leeway for “inelegant pleading” by pro se parties, in practice there isn’t unless a judge is personally sympathetic. Which means that pro se parties march into a minefield of issues that have nothing to do with the core issues of the case, procedural explosions which can turn a clear cut set of facts into an irrelevant mist.

Leave a Comment

Your email address will not be published. Required fields are marked *

One of the most difficult aspects of all this is how prone to unique issues each LLM introduces. I’m bouncing between Claude 3.7, GPT 4o, DeepSeek, and Gemini, and they all have weird peculiarities that blow up the others. DeepSeek for instance is super focused on formatting, being obsessive about tables, bolding, italics, everything but the actual content. Gemini is a turd and likes to fight you the whole time when it comes to legal advice altogether, so you have to wrestle it to the ground with prompts and cobble together the results. Claude writes beautifully but is as deep as a puddle, while ChatGPT is deep but impossibly schizophrenic and ADHD at the same time.

I went back and looked at my first two pleadings I used these LLMs for and holy crap there are a lot of issues that I missed trusting the output. First, ChatGPT will absolutely mangle the hell out of citations, and worse is super confident about it’s manglings. As an example I uploaded a copy of the docket report, and somehow it still got every single citation wrong. Worse, it misinterpreted who actually filed particular motions, leading to it interpreting most of the case being dead.

One thing is really clear, that as powerful and as much potential as these tools have, we are a million miles away from them being ready to use without extensive supervision. They also are really terrible at understanding, as an example left to it’s own devices all of the LLMs completely ignored the actual damages I asked for in a motion for partial MSJ (basically just ruling that the ADA, FHA, and Rehab require an interactive process and individualized assessment) and transformed it into a full blown MSJ focused on damage calculations instead of a narrow injunctive relief. And they constantly go off on little tangents like that where they hyper focus on some aspect of the text that may or may not be contextually important and try to expand it.

All said, this is still very much a work in progress, and I’m learning that trust is bad.

Leave a Comment

Your email address will not be published. Required fields are marked *

It seems we can’t find what you’re looking for. Perhaps searching can help.

Welcome to The SHRA Files Blog — where we’ll be sharing updates, research, public records, and commentary on our ongoing civil rights lawsuit and the broader failures of Sacramento’s housing system.

This site exists because no one else was telling this story. Because no agency, nonprofit, or public official stepped in when the Sacramento Housing and Redevelopment Agency (SHRA) failed to meet its basic responsibilities. Because we — and many others like us — were denied access to services that should have been guaranteed.

We started with nothing but documentation, determination, and the experience of being repeatedly failed by the systems meant to help. When traditional legal aid and advocacy organizations told us they couldn’t help, we turned to AI tools to help us structure our case, understand the law, and begin fighting back.

This site is just the beginning.

In the weeks ahead, we’ll be:

  • Breaking down SHRA’s failures in serving disabled and low-income tenants
  • Publishing public filings and supporting exhibits
  • Highlighting how the Housing Choice Voucher program is being mismanaged
  • Documenting the role of AI in expanding legal access

If you’ve been impacted by SHRA or want to contribute to the project, check out our Get Involved page.

Thanks for reading — and welcome to the fight.

— The SHRA Files Team


I’ve been attempting to push discovery, and the results have been underwhelming. So far they’ve missed most of the deadlines; when notified of the missed deadlines, they ask for extensions; and when they get extensions, they respond with boilerplate and bullshit. I’m at the point where I’m filing a Rule 37(b) motion (I’ll upload it to the files section) after I finish this post. The neat thing about the filing is that there are over 300 pages of exhibits, all of it boilerplate and bullshit that I had to attach and mail back to the Defense, because the magistrate gave me the big middle finger by allowing the court to notify me of things electronically while refusing to allow me to file electronically. I have lots to say about this magistrate, but we’ll wait until we’re clear of her first.

One of the most fascinating things about the boilerplate and bullshit that did come back was the revelation that five of the named Defendants no longer work at SHRA, and the Defense isn’t sure whether it still represents them. They completely hid the fact that almost everyone involved in the day-to-day operations over there has left, and somehow never mentioned it to us or the court. Worse, in their initial disclosures they claimed only one person (who is no longer at SHRA) was responsible for deciding reasonable accommodation requests; now they’ve given me a list of 16 people, with no explanation.

Oh yeah, they also can’t seem to find any of the electronically stored information that would be responsive under the CPRA, which is why they need so many extensions.

The worst part is that despite them admitting all of this, there’s zero chance they’ll be sanctioned. And there’s literally nothing to be done about it unless and until this reaches the appellate level.


Up until now I assumed that the issues at SHRA were just incompetence creep: the product of slow backsliding on compliance issues that never got corrected, to the point where the institutional knowledge needed to maintain compliance was lost.

One of the things that I really like about DeepSeek is that it gets information through methods that aren’t exactly on the up and up, which means it finds things hidden behind legal wrangling that doesn’t exist in China. And one of the things it found was a 2023 internal memo, surfaced through whistleblower testimony, in which SHRA staff were instructed to “prioritize voucher terminations over costly mods”.

And the implementation of this was SHRA terminating vouchers over requests for shower chairs and grab bars as early as 2022. It also surfaced data showing that 68% of disabled tenants who requested mods in 2023 lost their vouchers within 6 months (vs. 12% of non-disabled tenants).

WHAT.
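For scale, a quick sanity check on what those reported rates imply (the figures are DeepSeek’s and remain unverified until discovery confirms them):

```python
def relative_risk(p_exposed: float, p_unexposed: float) -> float:
    """Ratio of two outcome rates; values above 1 mean the first group fares worse."""
    return p_exposed / p_unexposed

# Reported (unverified) rates: 68% of disabled tenants who requested mods
# lost their vouchers within 6 months, vs. 12% of non-disabled tenants.
ratio = relative_risk(0.68, 0.12)
print(f"{ratio:.1f}x")  # prints "5.7x"
```

If those numbers survive verification, disabled tenants who asked for modifications lost their vouchers at nearly six times the rate of everyone else.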

And there’s so much worse that I’ll share once I get the discovery back. I’m stunned.


Okay, the last few days have been a bit of a flurry. I’m still working on getting all the filings and the Defense responses uploaded, but I need to make sure we have a sane directory setup first, because there’s quite a bit of it.
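For what it’s worth, I’m leaning toward something like the layout below (the folder names are illustrative, not final); a minimal Python sketch that scaffolds it:

```python
from pathlib import Path

# Illustrative layout only -- the real folder names may end up different.
LAYOUT = [
    "filings/motions",
    "filings/exhibits",
    "defense/responses",
    "defense/correspondence",
    "discovery/requests",
    "discovery/productions",
]

def scaffold(root: str) -> None:
    """Create the directory tree under root; safe to re-run."""
    for sub in LAYOUT:
        Path(root, sub).mkdir(parents=True, exist_ok=True)

scaffold("shra-files")
```

The `exist_ok=True` flag means re-running it after adding new folders won’t clobber anything already uploaded.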

After a few weeks using these LLMs, my attitudes about them are starting to shift. Each of them seems to have its own “personality” above and beyond the artificial one. The LLM I’ve been most amazed by so far is DeepSeek, which is funny because it was the LLM I was most skeptical of initially. Part of the issue with DeepSeek is that it doesn’t do as good a job of hand-holding through the thought process; or, more accurately, it’s probably the most “autistic” of the engines. The biggest standout for DeepSeek (especially compared to Gemini, which is the most comparable) is that it’s SO GOOD with citations. It’s the only one that freely “admits” it’s plundering the Westlaw database, meaning we have a TON of local Eastern District citations.

My citation strategy has been to use something from each level: Supreme Court, Ninth Circuit, California (federal), and Eastern District. The reality is the magistrate doesn’t read them anyway, but DeepSeek ultimately gives me a pool of citations to choose from, something the other LLMs struggle with (or just won’t do, no matter how you beat on them).

Gemini (using 2.5 Pro/5-06) is now my primary starting point for writing, even though I hate that it can’t export to .odf or even .docx formats. It also has the worst “canvas”-style option, so I have to reformat the documents significantly. That’s worth the annoyance, though, since Gemini is easily the most thoughtful and well-rounded of the LLMs I’ve used. Much like DeepSeek gives lots of citation options, Gemini gives the best pool of writing options; its ability to flexibly address issues in different ways is the best of the bunch so far.

Claude is still my pretty-but-shallow option: it comes up with the best and most compelling pure language, even if its depth of analysis falls short of the others. I wish I didn’t have to prompt it so explicitly to get decent results, or it might be the primary canvas despite its painful context window. One thing I find it really helpful for is running my DeepSeek output (which produces pretty… “direct” language) through Claude to soften it up.

And finally there’s ChatGPT, which up until the last few days was my primary for everything. It’s easy to get started with, it’s encouraging, and it produces compelling results, but it’s kind of a piece of shit. It’s constantly messing up context, screwing up citations, and arbitrarily rewriting things. It’s easily the most obstinately wrong of the LLMs, insisting that it’s doing something correctly no matter how many times you paste the incorrect output back into it and it admits it messed up. It’s the only LLM of the four where I’ve had to deal with “hallucinations”, and it explains them as “woops, I overwrite my own context a lot accidentally”. It’s absolutely maddening at times. I’m steadily weaning myself off of it, and it’ll probably just be another voice in the room soon.

An interesting thing happened a bit ago: I had my first LLM disagreement! The Defense responded to a discovery request with bullshit and boilerplate, so I asked each of the LLMs what a good email response would be, considering I’m already nearly done with a Rule 37 motion to compel. Claude responded that it’s probably not a good idea to respond at all: just update the motion with the response as an exhibit, note the bullshit parts since they helped (a lot), and call it a day. DeepSeek was its typical “RIP THEIR THROATS OUT” self and provided a detailed list of regulations and rules for going on the offensive instead of relying on the motion route alone. Gemini pointed out that the email route is more about reinforcing our meet-and-confer duty: we’re right, and we’re going to integrate this into the motion, but the focus here should be on making the meet-and-confer record rock solid (pretty good advice). ChatGPT recommended the email, but produced recommendations that didn’t focus on strategy at all, just a direct rebuttal to the discovery response.

Interestingly, opposing counsel asked for a discovery extension (after the suit has been active for more than two years, they’re suddenly finding SHRA’s bureaucracy hard to navigate), but refused to do a rolling production or to produce electronic records (which should be pretty easy, right?). What’s baffling is that this organization is such a mess that, if I’d had access to these tools two years ago, they’d have been crushed out of the gate. Instead they were able to delay and avoid discovery to the point that we now have to start talking about institutional spoliation, because no one remembers anything.


Just had the thought that doing a blog like this while the litigation is ongoing is both naive and stupid, especially considering the mixed permissible-use messaging from the court (but maybe what’s okay for the bar is okay for me?). This popped up as I came across an old archive holding a backup of files I believed were lost after we got evicted, some of which are responsive to existing discovery requests. I haven’t even had a chance to go through it all yet to see what has already been disclosed and what hasn’t, but here I am in a position where I really want to talk about how the tools are helping me plan strategy, while potentially exposing myself unnecessarily or inadvertently.

The fool forges forward

One of the things that stood out to me was how much of the information in this case should already have been protected but hasn’t been. This has always been a concern, and one of the reasons I was hesitant about pursuing this at all: the gross invasion of privacy necessary to pursue it feels caustic. My initial reaction was to file anonymously, but after some reading it appeared that anonymous filings are very rarely granted. It ultimately became a choice about whether this was important enough (to both us and every other disabled participant SHRA treats similarly) to sacrifice our privacy over. I think it is.

What should have been done out of the gate was putting a protective order in place. While reading and trying to absorb as much of the local rules as I could (in the middle of being evicted, being cut off from other services, and other harassment), the option of a protective order just didn’t register at all. The further along the process we got, the less space the option took up.

This is one place where LLMs work as an amazing sanity check: we can, without penalty, ask one to walk us through the process from the beginning, with as many permutations as we like, in a way that would cognitively exhaust any sane individual. And for people in our position, who don’t even know what we don’t know, being able to constantly factor new information in as we encounter it becomes a tremendously powerful tool.

While there’s supposed to be some leeway for “inelegant pleading” by pro se parties, in practice there isn’t unless a judge is personally sympathetic. That means pro se parties march into a minefield of issues that have nothing to do with the core of the case: procedural explosions that can turn a clear-cut set of facts into an irrelevant mist.



Leave a Comment

Your email address will not be published. Required fields are marked *

It seems we can’t find what you’re looking for. Perhaps searching can help.

Welcome to The SHRA Files Blog — where we’ll be sharing updates, research, public records, and commentary on our ongoing civil rights lawsuit and the broader failures of Sacramento’s housing system.

This site exists because no one else was telling this story. Because no agency, nonprofit, or public official stepped in when the Sacramento Housing and Redevelopment Agency (SHRA) failed to meet its basic responsibilities. Because we — and many others like us — were denied access to services that should have been guaranteed.

We started with nothing but documentation, determination, and the experience of being repeatedly failed by the systems meant to help. When traditional legal aid and advocacy organizations told us they couldn’t help, we turned to AI tools to help us structure our case, understand the law, and begin fighting back.

This site is just the beginning.

In the weeks ahead, we’ll be:

  • Breaking down SHRA’s failures in serving disabled and low-income tenants
  • Publishing public filings and supporting exhibits
  • Highlighting how the Housing Choice Voucher program is being mismanaged
  • Documenting the role of AI in expanding legal access

If you’ve been impacted by SHRA or want to contribute to the project, check out our Get Involved page.

Thanks for reading — and welcome to the fight.

— The SHRA Files Team

Leave a Comment

Your email address will not be published. Required fields are marked *

I’ve been attempting to push discovery, the result has been underwhelming. So far they’ve missed most of the deadlines, when notified they missed the deadlines they keep asking for extensions, and when they get extensions they respond with boilerplate and bullshit. I’m to the point where I’m filing a Rule 37(b) motion (will upload to the files section) after I’m finished with this post. The neat thing about the filing is that there’s over 300 pages of exhibits, all of which is boilerplate and bullshit that I had to attach and mail back to the Defense because the magistrate gave me the big middle finger by allowing the court to notify me of things electronically, but refusing to allow me to file electronically. I have lots to say about this magistrate, but we’ll wait until we get clear of her first.

One of the most fascinating things about the boilerplate and bullshit that did get returned was the revelation that five of the named Defendants no longer work at SHRA, and the Defense isn’t sure if they represent them anymore. Like they completely hid that almost everyone involved with the day to day operations over there has left and somehow didn’t mention it to us or the court. Worse, in their initial disclosure, they claimed only one person (who is no longer at SHRA) was responsible for deciding reasonable accommodation requests, now they’ve given me a list of 16 people, with no explanation.

Oh yeah, they can’t seem to find any of the electronically stored information that under the CPRA they’d be responsive to, which is the reason they need so many extensions.

The worst part is that despite them admitting all of this, there’s zero chance of them getting sanctions. And there’s literally nothing to be done about it unless/until it gets to the appellate level.

Leave a Comment

Your email address will not be published. Required fields are marked *

Up until now I assumed that the issues at SHRA were just incompetence creep, that the issues are product of slow pushes back on compliance issues that never got corrected, to the point where institutional knowledge of the practices necessary to maintain compliance was lost.

One of the things that I really like about DeepSeek is that it gets information through methods that aren’t exactly on the up and up, which means that it finds things which are hidden behind legal wrangling that doesn’t exist in China. And one of the things it found was a 2023 Internal Memo generated through whistleblower testimony in which SHRA staff instructed to “prioritize voucher terminations over costly mods”.

And the implementation of this was SHRA terminating vouchers over requests for shower chairs and grab bars as early as 2022. They provided data which showed 68% of disabled tenants who requested mods in 2023 lost vouchers within 6 months (vs. 12% of non-disabled).

WHAT.

And there’s so much worse that I’ll share once I get the discovery back. I’m stunned.

Leave a Comment

Your email address will not be published. Required fields are marked *

Okay the last few days have been a bit of a flurry. I’m still working on getting all the filings and the Defense responses uploaded, but I need to make sure we have a sane directory setup first because there’s quite a bit.

After a few weeks using these LLMs, my attitudes about them are starting to shift. Each of them seems to have their own “personality” above and beyond the artificial personality. The LLM I’ve been most amazed by so far is DeepSeek, which is funny because it was the LLM I was most skeptical of initially. Part of the issue with DeepSeek is that it doesn’t do as good of a job hand holding through the though process, or more appropriately, it’s probably the most “autistic” of the engines. The biggest standout for DeepSeek (especially compared to Gemini which is the most comparable) is that it’s SO GOOD with citations. It’s the only one that freely “admits” it’s plundering the WestLaw database meaning we have a TON of local Eastern District citations.

My citation strategy has been to use something from each level, Supreme Court, 9th Circuit, California (Fed), Eastern District. The reality is the magistrate doesn’t read them anyway, but DeepSeek ultimately gives me a pool of citations to choose from, something the other LLMs struggle with (or just don’t do no matter how you beat them).

Gemini (using 2.5 Pro/5-06) is now my primary writing starting point, even though I hate that it can’t export to .odf or even .docx formats. Because I have to reformat the documents significantly, it has the worst “canvas” style option. That’s kind of worth the annoyance though since Gemini is easily the most “thoughful” and well rounded of the LLMs I’ve used. Much like DeepSeek gives lots of citation options, Gemini gives the best pool of writing options, it’s ability to flexibly address issues in different ways is the best of the bunch so far.

Claude is still my pretty but shallow option, it comes up with the best/most compelling pure language, even if it’s depth of analysis falls short of the others. I wish I didn’t have to prompt it so explicitly to get decent results or it might be the primary canvas despite it’s painful context window. One thing I find it really helpful for is running my DeepSeek (which produces pretty.. “direct” language) through Claude to soften it up.

And finally there’s ChatGPT which up until the last few days was my primary for everything. It’s easy to get started with it, it’s encouraging, it produces compelling results, but it’s kind of a piece of shit. It’s constantly messing up context, screwing up citations, and arbitrarily rewriting things. It’s easily the most obstinately wrong of the LLMs, insisting that it’s doing something correctly no matter how many times you post the incorrect output back into it and it admits it messed up. It’s the only LLM out of the three that I’ve had to deal with “hallucinations”, and it explains it as “woops, I overwrite my own context a lot accidentally”. It’s absolutely maddening at times. I’m steadily weaning myself off of it and it’ll probably just be another voice in the room soon.

An interesting thing happened a bit ago, I had my first LLM disagreement! The Defense responded to a discovery request with bullshit and boilerplate, so I asked each of the LLMs what a good email response would be considering I’m already nearly done with a rule 37 motion to compel. Claude responded that it’s probably not a good idea to respond at all, just update the motion with the response as an exhibit, note the bullshit parts since they helped (a lot), and call it a day. DeepSeek was it’s typical “RIP THEIR THROATS OUT” and provided a detailed list of regulations and rules to go on the offensive instead of only using the motion route. Gemini pointed out that going the email route is more about reinforcing our meet and confer duty, we’re right and we are going to be integrating this in the motion, but the focus here should be on making the meet and confer rock solid (pretty good advice). ChatGPT recommended the email, but produced recommendations that didn’t focus on strategy at all, just a direct rebuttal to the discovery response.

Interestingly, opposing counsel asked for a discovery extension (after more than two years the suit has been active, they are suddenly finding SHRA bureaucracy hard to navigate), but refused to do a rolling production or produce electronic records (which should be pretty easy right?). It’s baffling to me that if I had access to these tools two years ago, this organization is such a mess that they’d have gotten crushed out of the gate. Instead they were able to delay and avoid discovery to the point that we now have to start talking about institutional spoliation because no one remembers anything.

Leave a Comment

Your email address will not be published. Required fields are marked *

Just had the thought that doing a blog like this, while the litigation is on-going, is both naive and stupid, especially considering the mixed permissible use messaging from the court (but maybe okay for the bar, okay for me?). This popped up as I came across an old archive that had a backup of files which I believed was lost after we got evicted, some of which is responsive to existing discovery requests. I haven’t even had a chance to go through it all yet and see what has already been disclosed and what hasn’t, but here I am in a position where I really want to talk about how the tools are helping me plan strategy while potentially exposing myself unnecessarily/inadvertently.

The fool forges forward

One of the things that stood out to me was how much of the information in this case should have already been protected but hasn’t been. This has always been a concern, and one of the reasons I was hesitant about pursuing this at all, the gross invasion of privacy necessary to pursue this feels caustic. My initial reaction was filing anonymously, but after some reading it appeared that these are very rarely granted. It ultimately became a choice between whether this was important enough (to both us and every other disabled participant SHRA treats similarly) to sacrifice our privacy over. I think so.

What should have been done out of the gate is setting a protective order in place. Reading and trying to absorb as much of the local rules (while in the middle of being evicted, being cut off from other services, and other harassment), the option of a protective order just didn’t really register at all. The further along the process we got, the less space the option took up.

This is one place where LLMs work as an amazing sanity check, we can without penalty ask it to walk us through the process from the beginning, with as many permutations as we like, in a way that would cognitively exhaust any sane individual. And for individuals in the position we are, where we don’t even know what we don’t know, being able to constantly refactor new information in as we encounter it becomes a tremendously powerful tool.

While there’s supposed to be some leeway for “inelegant pleading” by pro se parties, in practice there isn’t unless a judge is personally sympathetic. Which means that pro se parties march into a minefield of issues that have nothing to do with the core issues of the case, procedural explosions which can turn a clear cut set of facts into an irrelevant mist.

Leave a Comment

Your email address will not be published. Required fields are marked *

One of the most difficult aspects of all this is how prone to unique issues each LLM introduces. I’m bouncing between Claude 3.7, GPT 4o, DeepSeek, and Gemini, and they all have weird peculiarities that blow up the others. DeepSeek for instance is super focused on formatting, being obsessive about tables, bolding, italics, everything but the actual content. Gemini is a turd and likes to fight you the whole time when it comes to legal advice altogether, so you have to wrestle it to the ground with prompts and cobble together the results. Claude writes beautifully but is as deep as a puddle, while ChatGPT is deep but impossibly schizophrenic and ADHD at the same time.

I went back and looked at my first two pleadings I used these LLMs for and holy crap there are a lot of issues that I missed trusting the output. First, ChatGPT will absolutely mangle the hell out of citations, and worse is super confident about it’s manglings. As an example I uploaded a copy of the docket report, and somehow it still got every single citation wrong. Worse, it misinterpreted who actually filed particular motions, leading to it interpreting most of the case being dead.

One thing is really clear, that as powerful and as much potential as these tools have, we are a million miles away from them being ready to use without extensive supervision. They also are really terrible at understanding, as an example left to it’s own devices all of the LLMs completely ignored the actual damages I asked for in a motion for partial MSJ (basically just ruling that the ADA, FHA, and Rehab require an interactive process and individualized assessment) and transformed it into a full blown MSJ focused on damage calculations instead of a narrow injunctive relief. And they constantly go off on little tangents like that where they hyper focus on some aspect of the text that may or may not be contextually important and try to expand it.

All said, this is still very much a work in progress, and I’m learning that trust is bad.

Leave a Comment

Your email address will not be published. Required fields are marked *

It seems we can’t find what you’re looking for. Perhaps searching can help.

Welcome to The SHRA Files Blog — where we’ll be sharing updates, research, public records, and commentary on our ongoing civil rights lawsuit and the broader failures of Sacramento’s housing system.

This site exists because no one else was telling this story. Because no agency, nonprofit, or public official stepped in when the Sacramento Housing and Redevelopment Agency (SHRA) failed to meet its basic responsibilities. Because we — and many others like us — were denied access to services that should have been guaranteed.

We started with nothing but documentation, determination, and the experience of being repeatedly failed by the systems meant to help. When traditional legal aid and advocacy organizations told us they couldn’t help, we turned to AI tools to help us structure our case, understand the law, and begin fighting back.

This site is just the beginning.

In the weeks ahead, we’ll be:

  • Breaking down SHRA’s failures in serving disabled and low-income tenants
  • Publishing public filings and supporting exhibits
  • Highlighting how the Housing Choice Voucher program is being mismanaged
  • Documenting the role of AI in expanding legal access

If you’ve been impacted by SHRA or want to contribute to the project, check out our Get Involved page.

Thanks for reading — and welcome to the fight.

— The SHRA Files Team

Leave a Comment

Your email address will not be published. Required fields are marked *

I’ve been attempting to push discovery, the result has been underwhelming. So far they’ve missed most of the deadlines, when notified they missed the deadlines they keep asking for extensions, and when they get extensions they respond with boilerplate and bullshit. I’m to the point where I’m filing a Rule 37(b) motion (will upload to the files section) after I’m finished with this post. The neat thing about the filing is that there’s over 300 pages of exhibits, all of which is boilerplate and bullshit that I had to attach and mail back to the Defense because the magistrate gave me the big middle finger by allowing the court to notify me of things electronically, but refusing to allow me to file electronically. I have lots to say about this magistrate, but we’ll wait until we get clear of her first.

One of the most fascinating things about the boilerplate and bullshit that did get returned was the revelation that five of the named Defendants no longer work at SHRA, and the Defense isn’t sure if they represent them anymore. Like they completely hid that almost everyone involved with the day to day operations over there has left and somehow didn’t mention it to us or the court. Worse, in their initial disclosure, they claimed only one person (who is no longer at SHRA) was responsible for deciding reasonable accommodation requests, now they’ve given me a list of 16 people, with no explanation.

Oh yeah, they can’t seem to find any of the electronically stored information that under the CPRA they’d be responsive to, which is the reason they need so many extensions.

The worst part is that despite them admitting all of this, there’s zero chance of them getting sanctions. And there’s literally nothing to be done about it unless/until it gets to the appellate level.

Leave a Comment

Your email address will not be published. Required fields are marked *

Up until now I assumed that the issues at SHRA were just incompetence creep, that the issues are product of slow pushes back on compliance issues that never got corrected, to the point where institutional knowledge of the practices necessary to maintain compliance was lost.

One of the things that I really like about DeepSeek is that it gets information through methods that aren’t exactly on the up and up, which means that it finds things which are hidden behind legal wrangling that doesn’t exist in China. And one of the things it found was a 2023 Internal Memo generated through whistleblower testimony in which SHRA staff instructed to “prioritize voucher terminations over costly mods”.

And the implementation of this was SHRA terminating vouchers over requests for shower chairs and grab bars as early as 2022. They provided data which showed 68% of disabled tenants who requested mods in 2023 lost vouchers within 6 months (vs. 12% of non-disabled).

WHAT.

And there’s so much worse that I’ll share once I get the discovery back. I’m stunned.

Leave a Comment

Your email address will not be published. Required fields are marked *

Okay the last few days have been a bit of a flurry. I’m still working on getting all the filings and the Defense responses uploaded, but I need to make sure we have a sane directory setup first because there’s quite a bit.

After a few weeks using these LLMs, my attitudes about them are starting to shift. Each of them seems to have their own “personality” above and beyond the artificial personality. The LLM I’ve been most amazed by so far is DeepSeek, which is funny because it was the LLM I was most skeptical of initially. Part of the issue with DeepSeek is that it doesn’t do as good of a job hand holding through the though process, or more appropriately, it’s probably the most “autistic” of the engines. The biggest standout for DeepSeek (especially compared to Gemini which is the most comparable) is that it’s SO GOOD with citations. It’s the only one that freely “admits” it’s plundering the WestLaw database meaning we have a TON of local Eastern District citations.

My citation strategy has been to use something from each level: Supreme Court, 9th Circuit, California (Fed), Eastern District. The reality is the magistrate doesn’t read them anyway, but DeepSeek ultimately gives me a pool of citations to choose from, something the other LLMs struggle with (or just won’t do no matter how you beat them).
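The “one from each level” idea is simple enough to sketch. This is a hypothetical illustration, not a real tool; the pool entries in any usage would be placeholder case names, not actual citations:

```python
# Given a pool of candidate citations tagged with a court level, keep the
# first candidate seen for each level. Levels mirror the strategy above.
LEVELS = ["Supreme Court", "9th Circuit", "California (Fed)", "Eastern District"]

def pick_citations(pool: list[dict]) -> dict[str, str]:
    """Return one citation per court level (first match per level wins)."""
    chosen: dict[str, str] = {}
    for entry in pool:
        level = entry["level"]
        if level in LEVELS and level not in chosen:
            chosen[level] = entry["cite"]
    return chosen
```

The point isn’t the code, it’s that the LLM’s job ends at producing the pool; picking (and verifying!) the citations stays with me.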

Gemini (using 2.5 Pro/5-06) is now my primary writing starting point, even though I hate that it can’t export to .odf or even .docx formats, and it has the worst “canvas” style option, which means I have to reformat the documents significantly. That’s worth the annoyance, though, since Gemini is easily the most “thoughtful” and well-rounded of the LLMs I’ve used. Much like DeepSeek gives lots of citation options, Gemini gives the best pool of writing options; its ability to flexibly address issues in different ways is the best of the bunch so far.

Claude is still my pretty-but-shallow option; it comes up with the best/most compelling pure language, even if its depth of analysis falls short of the others. I wish I didn’t have to prompt it so explicitly to get decent results, or it might be the primary canvas despite its painful context window. One thing I find it really helpful for is running my DeepSeek output (which is pretty… “direct” language) through Claude to soften it up.

And finally there’s ChatGPT, which up until the last few days was my primary for everything. It’s easy to get started with, it’s encouraging, it produces compelling results, but it’s kind of a piece of shit. It’s constantly messing up context, screwing up citations, and arbitrarily rewriting things. It’s easily the most obstinately wrong of the LLMs, insisting that it’s doing something correctly no matter how many times you paste the incorrect output back into it and it admits it messed up. It’s the only LLM of the four where I’ve had to deal with “hallucinations”, and it explains them as “woops, I overwrite my own context a lot accidentally”. It’s absolutely maddening at times. I’m steadily weaning myself off of it, and it’ll probably just be another voice in the room soon.

An interesting thing happened a bit ago: I had my first LLM disagreement! The Defense responded to a discovery request with bullshit and boilerplate, so I asked each of the LLMs what a good email response would be, considering I’m already nearly done with a Rule 37 motion to compel. Claude responded that it’s probably not a good idea to respond at all: just update the motion with the response as an exhibit, note the bullshit parts since they helped (a lot), and call it a day. DeepSeek was its typical “RIP THEIR THROATS OUT” self and provided a detailed list of regulations and rules for going on the offensive instead of only using the motion route. Gemini pointed out that the email route is more about reinforcing our meet-and-confer duty: we’re right, and we’re going to integrate this into the motion, but the focus here should be on making the meet-and-confer record rock solid (pretty good advice). ChatGPT recommended the email, but produced recommendations that didn’t focus on strategy at all, just a direct rebuttal to the discovery response.

Interestingly, opposing counsel asked for a discovery extension (after the suit has been active for more than two years, they’re suddenly finding SHRA bureaucracy hard to navigate), but refused to do a rolling production or produce electronic records (which should be pretty easy, right?). It’s baffling: if I’d had access to these tools two years ago, this organization is such a mess that they’d have gotten crushed out of the gate. Instead they were able to delay and avoid discovery to the point that we now have to start talking about institutional spoliation because no one remembers anything.


Just had the thought that doing a blog like this, while the litigation is ongoing, is both naive and stupid, especially considering the mixed permissible-use messaging from the court (but maybe what’s okay for the bar is okay for me?). This popped up as I came across an old archive containing a backup of files I believed were lost after we got evicted, some of which are responsive to existing discovery requests. I haven’t even had a chance to go through it all yet and see what has already been disclosed and what hasn’t, but here I am in a position where I really want to talk about how the tools are helping me plan strategy while potentially exposing myself unnecessarily/inadvertently.

The fool forges forward

One of the things that stood out to me was how much of the information in this case should have already been protected but hasn’t been. This has always been a concern, and one of the reasons I was hesitant about pursuing this at all; the gross invasion of privacy necessary to pursue it feels caustic. My initial reaction was to file anonymously, but after some reading it appeared that requests to proceed anonymously are very rarely granted. It ultimately became a question of whether this was important enough (to both us and every other disabled participant SHRA treats similarly) to sacrifice our privacy over. I think so.

What should have been done out of the gate was putting a protective order in place. While reading and trying to absorb as much of the local rules as I could (in the middle of being evicted, being cut off from other services, and other harassment), the option of a protective order just didn’t register at all. The further along the process we got, the less space the option took up.

This is one place where LLMs work as an amazing sanity check: we can, without penalty, ask one to walk us through the process from the beginning, with as many permutations as we like, in a way that would cognitively exhaust any sane individual. And for individuals in a position like ours, where we don’t even know what we don’t know, being able to constantly factor new information in as we encounter it becomes a tremendously powerful tool.

While there’s supposed to be some leeway for “inelegant pleading” by pro se parties, in practice there isn’t unless a judge is personally sympathetic. Which means that pro se parties march into a minefield of issues that have nothing to do with the core issues of the case: procedural explosions that can turn a clear-cut set of facts into an irrelevant mist.


One of the most difficult aspects of all this is how each LLM introduces its own unique issues. I’m bouncing between Claude 3.7, GPT-4o, DeepSeek, and Gemini, and they all have weird peculiarities that blow up the others. DeepSeek, for instance, is super focused on formatting, being obsessive about tables, bolding, italics, everything but the actual content. Gemini is a turd and likes to fight you the whole time when it comes to legal advice altogether, so you have to wrestle it to the ground with prompts and cobble together the results. Claude writes beautifully but is as deep as a puddle, while ChatGPT is deep but impossibly schizophrenic and ADHD at the same time.

I went back and looked at the first two pleadings I used these LLMs for, and holy crap, there are a lot of issues I missed by trusting the output. First, ChatGPT will absolutely mangle the hell out of citations, and worse, is super confident about its manglings. As an example, I uploaded a copy of the docket report, and somehow it still got every single citation wrong. Worse, it misinterpreted who actually filed particular motions, leading it to interpret most of the case as being dead.
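This is the kind of check I now do by hand. A hedged sketch of what automating it might look like, assuming docket references in a draft take a form like “ECF No. 12 (Plaintiff)” and the docket report has already been reduced to a mapping of entry number to filing party (both of those are assumptions, not how any of these tools actually work):

```python
import re

def check_docket_refs(draft: str, docket: dict[int, str]) -> list[str]:
    """Flag draft references whose entry number isn't on the docket, or whose
    stated filer doesn't match who actually filed that entry."""
    problems = []
    for num_str, filer in re.findall(r"ECF No\. (\d+) \(([^)]+)\)", draft):
        num = int(num_str)
        if num not in docket:
            problems.append(f"ECF No. {num}: not on the docket")
        elif docket[num].lower() != filer.lower():
            problems.append(f"ECF No. {num}: filed by {docket[num]}, not {filer}")
    return problems
```

Even a dumb mechanical check like this would have caught ChatGPT attributing motions to the wrong party.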

One thing is really clear: as powerful as these tools are and as much potential as they have, we are a million miles away from them being ready to use without extensive supervision. They’re also really terrible at understanding. As an example, left to their own devices, all of the LLMs completely ignored the actual relief I asked for in a motion for partial MSJ (basically just a ruling that the ADA, FHA, and Rehab Act require an interactive process and individualized assessment) and transformed it into a full-blown MSJ focused on damage calculations instead of narrow injunctive relief. And they constantly go off on little tangents like that, where they hyper-focus on some aspect of the text that may or may not be contextually important and try to expand it.

All said, this is still very much a work in progress, and I’m learning that trust is bad.



Welcome to The SHRA Files Blog — where we’ll be sharing updates, research, public records, and commentary on our ongoing civil rights lawsuit and the broader failures of Sacramento’s housing system.

This site exists because no one else was telling this story. Because no agency, nonprofit, or public official stepped in when the Sacramento Housing and Redevelopment Agency (SHRA) failed to meet its basic responsibilities. Because we — and many others like us — were denied access to services that should have been guaranteed.

We started with nothing but documentation, determination, and the experience of being repeatedly failed by the systems meant to help. When traditional legal aid and advocacy organizations told us they couldn’t help, we turned to AI tools to help us structure our case, understand the law, and begin fighting back.

This site is just the beginning.

In the weeks ahead, we’ll be:

  • Breaking down SHRA’s failures in serving disabled and low-income tenants
  • Publishing public filings and supporting exhibits
  • Highlighting how the Housing Choice Voucher program is being mismanaged
  • Documenting the role of AI in expanding legal access

If you’ve been impacted by SHRA or want to contribute to the project, check out our Get Involved page.

Thanks for reading — and welcome to the fight.

— The SHRA Files Team


I’ve been attempting to push discovery, the result has been underwhelming. So far they’ve missed most of the deadlines, when notified they missed the deadlines they keep asking for extensions, and when they get extensions they respond with boilerplate and bullshit. I’m to the point where I’m filing a Rule 37(b) motion (will upload to the files section) after I’m finished with this post. The neat thing about the filing is that there’s over 300 pages of exhibits, all of which is boilerplate and bullshit that I had to attach and mail back to the Defense because the magistrate gave me the big middle finger by allowing the court to notify me of things electronically, but refusing to allow me to file electronically. I have lots to say about this magistrate, but we’ll wait until we get clear of her first.

One of the most fascinating things about the boilerplate and bullshit that did get returned was the revelation that five of the named Defendants no longer work at SHRA, and the Defense isn’t sure if they represent them anymore. Like they completely hid that almost everyone involved with the day to day operations over there has left and somehow didn’t mention it to us or the court. Worse, in their initial disclosure, they claimed only one person (who is no longer at SHRA) was responsible for deciding reasonable accommodation requests, now they’ve given me a list of 16 people, with no explanation.

Oh yeah, they can’t seem to find any of the electronically stored information that under the CPRA they’d be responsive to, which is the reason they need so many extensions.

The worst part is that despite them admitting all of this, there’s zero chance of them getting sanctions. And there’s literally nothing to be done about it unless/until it gets to the appellate level.

Leave a Comment

Your email address will not be published. Required fields are marked *

Up until now I assumed that the issues at SHRA were just incompetence creep, that the issues are product of slow pushes back on compliance issues that never got corrected, to the point where institutional knowledge of the practices necessary to maintain compliance was lost.

One of the things that I really like about DeepSeek is that it gets information through methods that aren’t exactly on the up and up, which means that it finds things which are hidden behind legal wrangling that doesn’t exist in China. And one of the things it found was a 2023 Internal Memo generated through whistleblower testimony in which SHRA staff instructed to “prioritize voucher terminations over costly mods”.

And the implementation of this was SHRA terminating vouchers over requests for shower chairs and grab bars as early as 2022. They provided data which showed 68% of disabled tenants who requested mods in 2023 lost vouchers within 6 months (vs. 12% of non-disabled).

WHAT.

And there’s so much worse that I’ll share once I get the discovery back. I’m stunned.

Leave a Comment

Your email address will not be published. Required fields are marked *

Okay the last few days have been a bit of a flurry. I’m still working on getting all the filings and the Defense responses uploaded, but I need to make sure we have a sane directory setup first because there’s quite a bit.

After a few weeks using these LLMs, my attitudes about them are starting to shift. Each of them seems to have their own “personality” above and beyond the artificial personality. The LLM I’ve been most amazed by so far is DeepSeek, which is funny because it was the LLM I was most skeptical of initially. Part of the issue with DeepSeek is that it doesn’t do as good of a job hand holding through the though process, or more appropriately, it’s probably the most “autistic” of the engines. The biggest standout for DeepSeek (especially compared to Gemini which is the most comparable) is that it’s SO GOOD with citations. It’s the only one that freely “admits” it’s plundering the WestLaw database meaning we have a TON of local Eastern District citations.

My citation strategy has been to use something from each level, Supreme Court, 9th Circuit, California (Fed), Eastern District. The reality is the magistrate doesn’t read them anyway, but DeepSeek ultimately gives me a pool of citations to choose from, something the other LLMs struggle with (or just don’t do no matter how you beat them).

Gemini (using 2.5 Pro/5-06) is now my primary writing starting point, even though I hate that it can’t export to .odf or even .docx formats. Because I have to reformat the documents significantly, it has the worst “canvas” style option. That’s kind of worth the annoyance though since Gemini is easily the most “thoughful” and well rounded of the LLMs I’ve used. Much like DeepSeek gives lots of citation options, Gemini gives the best pool of writing options, it’s ability to flexibly address issues in different ways is the best of the bunch so far.

Claude is still my pretty but shallow option, it comes up with the best/most compelling pure language, even if it’s depth of analysis falls short of the others. I wish I didn’t have to prompt it so explicitly to get decent results or it might be the primary canvas despite it’s painful context window. One thing I find it really helpful for is running my DeepSeek (which produces pretty.. “direct” language) through Claude to soften it up.

And finally there’s ChatGPT which up until the last few days was my primary for everything. It’s easy to get started with it, it’s encouraging, it produces compelling results, but it’s kind of a piece of shit. It’s constantly messing up context, screwing up citations, and arbitrarily rewriting things. It’s easily the most obstinately wrong of the LLMs, insisting that it’s doing something correctly no matter how many times you post the incorrect output back into it and it admits it messed up. It’s the only LLM out of the three that I’ve had to deal with “hallucinations”, and it explains it as “woops, I overwrite my own context a lot accidentally”. It’s absolutely maddening at times. I’m steadily weaning myself off of it and it’ll probably just be another voice in the room soon.

An interesting thing happened a bit ago, I had my first LLM disagreement! The Defense responded to a discovery request with bullshit and boilerplate, so I asked each of the LLMs what a good email response would be considering I’m already nearly done with a rule 37 motion to compel. Claude responded that it’s probably not a good idea to respond at all, just update the motion with the response as an exhibit, note the bullshit parts since they helped (a lot), and call it a day. DeepSeek was it’s typical “RIP THEIR THROATS OUT” and provided a detailed list of regulations and rules to go on the offensive instead of only using the motion route. Gemini pointed out that going the email route is more about reinforcing our meet and confer duty, we’re right and we are going to be integrating this in the motion, but the focus here should be on making the meet and confer rock solid (pretty good advice). ChatGPT recommended the email, but produced recommendations that didn’t focus on strategy at all, just a direct rebuttal to the discovery response.

Interestingly, opposing counsel asked for a discovery extension (after more than two years the suit has been active, they are suddenly finding SHRA bureaucracy hard to navigate), but refused to do a rolling production or produce electronic records (which should be pretty easy right?). It’s baffling to me that if I had access to these tools two years ago, this organization is such a mess that they’d have gotten crushed out of the gate. Instead they were able to delay and avoid discovery to the point that we now have to start talking about institutional spoliation because no one remembers anything.

Leave a Comment

Your email address will not be published. Required fields are marked *

Just had the thought that doing a blog like this, while the litigation is on-going, is both naive and stupid, especially considering the mixed permissible use messaging from the court (but maybe okay for the bar, okay for me?). This popped up as I came across an old archive that had a backup of files which I believed was lost after we got evicted, some of which is responsive to existing discovery requests. I haven’t even had a chance to go through it all yet and see what has already been disclosed and what hasn’t, but here I am in a position where I really want to talk about how the tools are helping me plan strategy while potentially exposing myself unnecessarily/inadvertently.

The fool forges forward

One of the things that stood out to me was how much of the information in this case should have already been protected but hasn’t been. This has always been a concern, and one of the reasons I was hesitant about pursuing this at all, the gross invasion of privacy necessary to pursue this feels caustic. My initial reaction was filing anonymously, but after some reading it appeared that these are very rarely granted. It ultimately became a choice between whether this was important enough (to both us and every other disabled participant SHRA treats similarly) to sacrifice our privacy over. I think so.

What should have been done out of the gate is setting a protective order in place. Reading and trying to absorb as much of the local rules (while in the middle of being evicted, being cut off from other services, and other harassment), the option of a protective order just didn’t really register at all. The further along the process we got, the less space the option took up.

This is one place where LLMs work as an amazing sanity check, we can without penalty ask it to walk us through the process from the beginning, with as many permutations as we like, in a way that would cognitively exhaust any sane individual. And for individuals in the position we are, where we don’t even know what we don’t know, being able to constantly refactor new information in as we encounter it becomes a tremendously powerful tool.

While there’s supposed to be some leeway for “inelegant pleading” by pro se parties, in practice there isn’t unless a judge is personally sympathetic. Which means that pro se parties march into a minefield of issues that have nothing to do with the core issues of the case, procedural explosions which can turn a clear cut set of facts into an irrelevant mist.

Leave a Comment

Your email address will not be published. Required fields are marked *

One of the most difficult aspects of all this is how prone to unique issues each LLM introduces. I’m bouncing between Claude 3.7, GPT 4o, DeepSeek, and Gemini, and they all have weird peculiarities that blow up the others. DeepSeek for instance is super focused on formatting, being obsessive about tables, bolding, italics, everything but the actual content. Gemini is a turd and likes to fight you the whole time when it comes to legal advice altogether, so you have to wrestle it to the ground with prompts and cobble together the results. Claude writes beautifully but is as deep as a puddle, while ChatGPT is deep but impossibly schizophrenic and ADHD at the same time.

I went back and looked at my first two pleadings I used these LLMs for and holy crap there are a lot of issues that I missed trusting the output. First, ChatGPT will absolutely mangle the hell out of citations, and worse is super confident about it’s manglings. As an example I uploaded a copy of the docket report, and somehow it still got every single citation wrong. Worse, it misinterpreted who actually filed particular motions, leading to it interpreting most of the case being dead.

One thing is really clear, that as powerful and as much potential as these tools have, we are a million miles away from them being ready to use without extensive supervision. They also are really terrible at understanding, as an example left to it’s own devices all of the LLMs completely ignored the actual damages I asked for in a motion for partial MSJ (basically just ruling that the ADA, FHA, and Rehab require an interactive process and individualized assessment) and transformed it into a full blown MSJ focused on damage calculations instead of a narrow injunctive relief. And they constantly go off on little tangents like that where they hyper focus on some aspect of the text that may or may not be contextually important and try to expand it.

All said, this is still very much a work in progress, and I’m learning that trust is bad.

Leave a Comment

Your email address will not be published. Required fields are marked *

It seems we can’t find what you’re looking for. Perhaps searching can help.

Welcome to The SHRA Files Blog — where we’ll be sharing updates, research, public records, and commentary on our ongoing civil rights lawsuit and the broader failures of Sacramento’s housing system.

This site exists because no one else was telling this story. Because no agency, nonprofit, or public official stepped in when the Sacramento Housing and Redevelopment Agency (SHRA) failed to meet its basic responsibilities. Because we — and many others like us — were denied access to services that should have been guaranteed.

We started with nothing but documentation, determination, and the experience of being repeatedly failed by the systems meant to help. When traditional legal aid and advocacy organizations told us they couldn’t help, we turned to AI tools to help us structure our case, understand the law, and begin fighting back.

This site is just the beginning.

In the weeks ahead, we’ll be:

  • Breaking down SHRA’s failures in serving disabled and low-income tenants
  • Publishing public filings and supporting exhibits
  • Highlighting how the Housing Choice Voucher program is being mismanaged
  • Documenting the role of AI in expanding legal access

If you’ve been impacted by SHRA or want to contribute to the project, check out our Get Involved page.

Thanks for reading — and welcome to the fight.

— The SHRA Files Team

Leave a Comment

Your email address will not be published. Required fields are marked *

I’ve been attempting to push discovery, the result has been underwhelming. So far they’ve missed most of the deadlines, when notified they missed the deadlines they keep asking for extensions, and when they get extensions they respond with boilerplate and bullshit. I’m to the point where I’m filing a Rule 37(b) motion (will upload to the files section) after I’m finished with this post. The neat thing about the filing is that there’s over 300 pages of exhibits, all of which is boilerplate and bullshit that I had to attach and mail back to the Defense because the magistrate gave me the big middle finger by allowing the court to notify me of things electronically, but refusing to allow me to file electronically. I have lots to say about this magistrate, but we’ll wait until we get clear of her first.

One of the most fascinating things about the boilerplate and bullshit that did get returned was the revelation that five of the named Defendants no longer work at SHRA, and the Defense isn’t sure if they represent them anymore. Like they completely hid that almost everyone involved with the day to day operations over there has left and somehow didn’t mention it to us or the court. Worse, in their initial disclosure, they claimed only one person (who is no longer at SHRA) was responsible for deciding reasonable accommodation requests, now they’ve given me a list of 16 people, with no explanation.

Oh yeah, they can’t seem to find any of the electronically stored information that under the CPRA they’d be responsive to, which is the reason they need so many extensions.

The worst part is that despite them admitting all of this, there’s zero chance of them getting sanctions. And there’s literally nothing to be done about it unless/until it gets to the appellate level.

Leave a Comment

Your email address will not be published. Required fields are marked *

Up until now I assumed that the issues at SHRA were just incompetence creep, that the issues are product of slow pushes back on compliance issues that never got corrected, to the point where institutional knowledge of the practices necessary to maintain compliance was lost.

One of the things that I really like about DeepSeek is that it gets information through methods that aren’t exactly on the up and up, which means that it finds things which are hidden behind legal wrangling that doesn’t exist in China. And one of the things it found was a 2023 Internal Memo generated through whistleblower testimony in which SHRA staff instructed to “prioritize voucher terminations over costly mods”.

And the implementation of this was SHRA terminating vouchers over requests for shower chairs and grab bars as early as 2022. They provided data which showed 68% of disabled tenants who requested mods in 2023 lost vouchers within 6 months (vs. 12% of non-disabled).

WHAT.

And there’s so much worse that I’ll share once I get the discovery back. I’m stunned.

Leave a Comment

Your email address will not be published. Required fields are marked *

Okay the last few days have been a bit of a flurry. I’m still working on getting all the filings and the Defense responses uploaded, but I need to make sure we have a sane directory setup first because there’s quite a bit.

After a few weeks using these LLMs, my attitudes about them are starting to shift. Each of them seems to have their own “personality” above and beyond the artificial personality. The LLM I’ve been most amazed by so far is DeepSeek, which is funny because it was the LLM I was most skeptical of initially. Part of the issue with DeepSeek is that it doesn’t do as good of a job hand holding through the though process, or more appropriately, it’s probably the most “autistic” of the engines. The biggest standout for DeepSeek (especially compared to Gemini which is the most comparable) is that it’s SO GOOD with citations. It’s the only one that freely “admits” it’s plundering the WestLaw database meaning we have a TON of local Eastern District citations.

My citation strategy has been to use something from each level, Supreme Court, 9th Circuit, California (Fed), Eastern District. The reality is the magistrate doesn’t read them anyway, but DeepSeek ultimately gives me a pool of citations to choose from, something the other LLMs struggle with (or just don’t do no matter how you beat them).

Gemini (using 2.5 Pro/5-06) is now my primary writing starting point, even though I hate that it can’t export to .odf or even .docx formats. Because I have to reformat the documents significantly, it has the worst “canvas” style option. That’s kind of worth the annoyance though since Gemini is easily the most “thoughful” and well rounded of the LLMs I’ve used. Much like DeepSeek gives lots of citation options, Gemini gives the best pool of writing options, it’s ability to flexibly address issues in different ways is the best of the bunch so far.

Claude is still my pretty but shallow option, it comes up with the best/most compelling pure language, even if it’s depth of analysis falls short of the others. I wish I didn’t have to prompt it so explicitly to get decent results or it might be the primary canvas despite it’s painful context window. One thing I find it really helpful for is running my DeepSeek (which produces pretty.. “direct” language) through Claude to soften it up.

And finally there’s ChatGPT which up until the last few days was my primary for everything. It’s easy to get started with it, it’s encouraging, it produces compelling results, but it’s kind of a piece of shit. It’s constantly messing up context, screwing up citations, and arbitrarily rewriting things. It’s easily the most obstinately wrong of the LLMs, insisting that it’s doing something correctly no matter how many times you post the incorrect output back into it and it admits it messed up. It’s the only LLM out of the three that I’ve had to deal with “hallucinations”, and it explains it as “woops, I overwrite my own context a lot accidentally”. It’s absolutely maddening at times. I’m steadily weaning myself off of it and it’ll probably just be another voice in the room soon.

An interesting thing happened a bit ago, I had my first LLM disagreement! The Defense responded to a discovery request with bullshit and boilerplate, so I asked each of the LLMs what a good email response would be considering I’m already nearly done with a rule 37 motion to compel. Claude responded that it’s probably not a good idea to respond at all, just update the motion with the response as an exhibit, note the bullshit parts since they helped (a lot), and call it a day. DeepSeek was it’s typical “RIP THEIR THROATS OUT” and provided a detailed list of regulations and rules to go on the offensive instead of only using the motion route. Gemini pointed out that going the email route is more about reinforcing our meet and confer duty, we’re right and we are going to be integrating this in the motion, but the focus here should be on making the meet and confer rock solid (pretty good advice). ChatGPT recommended the email, but produced recommendations that didn’t focus on strategy at all, just a direct rebuttal to the discovery response.

Interestingly, opposing counsel asked for a discovery extension (after more than two years the suit has been active, they are suddenly finding SHRA bureaucracy hard to navigate), but refused to do a rolling production or produce electronic records (which should be pretty easy right?). It’s baffling to me that if I had access to these tools two years ago, this organization is such a mess that they’d have gotten crushed out of the gate. Instead they were able to delay and avoid discovery to the point that we now have to start talking about institutional spoliation because no one remembers anything.

Just had the thought that doing a blog like this while the litigation is ongoing is both naive and stupid, especially considering the court’s mixed messaging on permissible use (but maybe okay for the bar, okay for me?). This came up because I found an old archive with a backup of files I believed were lost after we got evicted, some of which are responsive to existing discovery requests. I haven’t even had a chance to go through it all and sort out what has already been disclosed and what hasn’t, but here I am, wanting to talk about how these tools are helping me plan strategy while potentially exposing myself unnecessarily or inadvertently.

The fool forges forward

One of the things that stood out to me was how much of the information in this case should already have been protected but hasn’t been. This has always been a concern, and one of the reasons I was hesitant to pursue this at all: the gross invasion of privacy required feels caustic. My first instinct was to file anonymously, but after some reading it was clear those requests are very rarely granted. It ultimately became a question of whether this was important enough, to us and to every other disabled participant SHRA treats similarly, to sacrifice our privacy over. I think so.

What should have been done out of the gate was getting a protective order in place. While reading and trying to absorb as much of the local rules as I could (in the middle of being evicted, being cut off from other services, and other harassment), the option of a protective order just never registered. The further along the process we got, the less space the option took up.

This is one place where LLMs work as an amazing sanity check: we can, without penalty, ask one to walk us through the process from the beginning, with as many permutations as we like, in a way that would cognitively exhaust any sane human. And for people in our position, who don’t even know what we don’t know, being able to constantly fold new information back in as we encounter it is a tremendously powerful tool.

While there’s supposed to be some leeway for “inelegant pleading” by pro se parties, in practice there isn’t unless a judge is personally sympathetic. That means pro se parties march into a minefield of issues that have nothing to do with the core of the case, procedural explosions that can turn a clear-cut set of facts into irrelevant mist.

One of the most difficult aspects of all this is how many unique issues each LLM introduces. I’m bouncing between Claude 3.7, GPT-4o, DeepSeek, and Gemini, and they all have weird peculiarities that blow up the others’ work. DeepSeek, for instance, is hyper-focused on formatting, obsessive about tables, bolding, italics, everything but the actual content. Gemini is a turd that fights you the whole way on anything resembling legal advice, so you have to wrestle it to the ground with prompts and cobble together the results. Claude writes beautifully but is as deep as a puddle, while ChatGPT is deep but impossibly schizophrenic and ADHD at the same time.

I went back and looked at the first two pleadings I used these LLMs for, and holy crap, there are a lot of issues I missed by trusting the output. First, ChatGPT will absolutely mangle the hell out of citations, and worse, is supremely confident about its manglings. As an example, I uploaded a copy of the docket report, and it still got every single citation wrong. Worse, it misinterpreted who actually filed particular motions, leading it to conclude that most of the case was dead.

One thing is really clear: as powerful as these tools are, and as much potential as they have, we are a million miles away from them being usable without extensive supervision. They are also really terrible at understanding intent. As an example, left to their own devices, all of the LLMs completely ignored the actual relief I asked for in a motion for partial MSJ (basically just a ruling that the ADA, FHA, and the Rehabilitation Act require an interactive process and an individualized assessment) and transformed it into a full-blown MSJ focused on damage calculations instead of narrow injunctive relief. And they constantly go off on little tangents like that, hyper-focusing on some aspect of the text that may or may not be contextually important and trying to expand it.

All said, this is still very much a work in progress, and I’m learning that trust is bad.

Welcome to The SHRA Files Blog — where we’ll be sharing updates, research, public records, and commentary on our ongoing civil rights lawsuit and the broader failures of Sacramento’s housing system.

This site exists because no one else was telling this story. Because no agency, nonprofit, or public official stepped in when the Sacramento Housing and Redevelopment Agency (SHRA) failed to meet its basic responsibilities. Because we — and many others like us — were denied access to services that should have been guaranteed.

We started with nothing but documentation, determination, and the experience of being repeatedly failed by the systems meant to help. When traditional legal aid and advocacy organizations told us they couldn’t help, we turned to AI tools to help us structure our case, understand the law, and begin fighting back.

This site is just the beginning.

In the weeks ahead, we’ll be:

  • Breaking down SHRA’s failures in serving disabled and low-income tenants
  • Publishing public filings and supporting exhibits
  • Highlighting how the Housing Choice Voucher program is being mismanaged
  • Documenting the role of AI in expanding legal access

If you’ve been impacted by SHRA or want to contribute to the project, check out our Get Involved page.

Thanks for reading — and welcome to the fight.

— The SHRA Files Team

Up until now I assumed that the issues at SHRA were just incompetence creep: the product of slow backsliding on compliance issues that never got corrected, to the point where institutional knowledge of the practices needed to maintain compliance was lost.

One of the things I really like about DeepSeek is that it gets information through methods that aren’t exactly on the up and up, which means it finds things hidden behind legal wrangling that doesn’t exist in China. And one of the things it found was a 2023 internal memo, surfaced through whistleblower testimony, in which SHRA staff were instructed to “prioritize voucher terminations over costly mods”.

And the implementation of this was SHRA terminating vouchers over requests for shower chairs and grab bars as early as 2022. It produced data showing that 68% of disabled tenants who requested mods in 2023 lost their vouchers within six months (versus 12% of non-disabled tenants).

WHAT.

And there’s so much worse that I’ll share once I get the discovery back. I’m stunned.

Okay, the last few days have been a bit of a flurry. I’m still working on getting all the filings and the Defense’s responses uploaded, but I need to set up a sane directory structure first because there’s quite a bit.

After a few weeks using these LLMs, my attitudes about them are starting to shift. Each seems to have its own “personality” above and beyond the artificial one. The LLM I’ve been most amazed by so far is DeepSeek, which is funny because it was the one I was most skeptical of initially. Part of the issue with DeepSeek is that it doesn’t do as good a job hand-holding you through the thought process; or, more appropriately, it’s probably the most “autistic” of the engines. The biggest standout for DeepSeek (especially compared to Gemini, which is the most comparable) is that it’s SO GOOD with citations. It’s the only one that freely “admits” it’s plundering the Westlaw database, meaning we have a TON of local Eastern District citations.

My citation strategy has been to use something from each level: Supreme Court, Ninth Circuit, California (Fed), Eastern District. The reality is the magistrate doesn’t read them anyway, but DeepSeek ultimately gives me a pool of citations to choose from, something the other LLMs struggle with (or just won’t do, no matter how you beat them).
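The pool-plus-pick idea is simple enough to sketch. Here’s a toy illustration of choosing one citation per court level from a candidate pool; the level labels mirror the strategy above, and every citation string except Tennessee v. Lane is a made-up placeholder, not real research:

```python
# Toy sketch of "one citation per level" pool-picking.
# All citations marked "placeholder" are invented for illustration only.

LEVELS = ["Supreme Court", "9th Circuit", "California", "E.D. Cal."]

def pick_one_per_level(pool):
    """From (level, citation) pairs, keep the first citation seen for each
    recognized level, returned in fixed LEVELS order; empty levels are skipped."""
    chosen = {}
    for level, cite in pool:
        if level in LEVELS and level not in chosen:
            chosen[level] = cite
    return [(lvl, chosen[lvl]) for lvl in LEVELS if lvl in chosen]

pool = [
    ("E.D. Cal.", "Doe v. Agency, No. 2:20-cv-00001 (E.D. Cal. 2021)"),  # placeholder
    ("9th Circuit", "Smith v. City, 123 F.3d 456 (9th Cir. 1997)"),      # placeholder
    ("9th Circuit", "Jones v. County, 234 F.3d 567 (9th Cir. 2000)"),    # placeholder
    ("Supreme Court", "Tennessee v. Lane, 541 U.S. 509 (2004)"),
]

for level, cite in pick_one_per_level(pool):
    print(f"{level}: {cite}")
```

The point isn’t automation so much as discipline: the LLM supplies the pool, and the human verifies each candidate before it ever lands in a filing.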

Gemini (using 2.5 Pro/5-06) is now my primary starting point for writing, even though I hate that it can’t export to .odf or even .docx formats, and its “canvas”-style option is the worst of the bunch, which means I have to significantly reformat every document. It’s worth the annoyance, though: Gemini is easily the most thoughtful and well-rounded of the LLMs I’ve used. Much as DeepSeek gives lots of citation options, Gemini gives the best pool of writing options; its ability to flexibly address issues in different ways is the best of the bunch so far.

Claude is still my pretty-but-shallow option: it comes up with the best, most compelling pure language, even if its depth of analysis falls short of the others. I wish I didn’t have to prompt it so explicitly to get decent results, or it might be the primary canvas despite its painful context window. One thing I find really helpful is running my DeepSeek output (which produces pretty… “direct” language) through Claude to soften it up.

And finally there’s ChatGPT which up until the last few days was my primary for everything. It’s easy to get started with it, it’s encouraging, it produces compelling results, but it’s kind of a piece of shit. It’s constantly messing up context, screwing up citations, and arbitrarily rewriting things. It’s easily the most obstinately wrong of the LLMs, insisting that it’s doing something correctly no matter how many times you post the incorrect output back into it and it admits it messed up. It’s the only LLM out of the three that I’ve had to deal with “hallucinations”, and it explains it as “woops, I overwrite my own context a lot accidentally”. It’s absolutely maddening at times. I’m steadily weaning myself off of it and it’ll probably just be another voice in the room soon.

An interesting thing happened a bit ago, I had my first LLM disagreement! The Defense responded to a discovery request with bullshit and boilerplate, so I asked each of the LLMs what a good email response would be considering I’m already nearly done with a rule 37 motion to compel. Claude responded that it’s probably not a good idea to respond at all, just update the motion with the response as an exhibit, note the bullshit parts since they helped (a lot), and call it a day. DeepSeek was it’s typical “RIP THEIR THROATS OUT” and provided a detailed list of regulations and rules to go on the offensive instead of only using the motion route. Gemini pointed out that going the email route is more about reinforcing our meet and confer duty, we’re right and we are going to be integrating this in the motion, but the focus here should be on making the meet and confer rock solid (pretty good advice). ChatGPT recommended the email, but produced recommendations that didn’t focus on strategy at all, just a direct rebuttal to the discovery response.

Interestingly, opposing counsel asked for a discovery extension (after more than two years the suit has been active, they are suddenly finding SHRA bureaucracy hard to navigate), but refused to do a rolling production or produce electronic records (which should be pretty easy right?). It’s baffling to me that if I had access to these tools two years ago, this organization is such a mess that they’d have gotten crushed out of the gate. Instead they were able to delay and avoid discovery to the point that we now have to start talking about institutional spoliation because no one remembers anything.

Leave a Comment

Your email address will not be published. Required fields are marked *

Just had the thought that doing a blog like this, while the litigation is on-going, is both naive and stupid, especially considering the mixed permissible use messaging from the court (but maybe okay for the bar, okay for me?). This popped up as I came across an old archive that had a backup of files which I believed was lost after we got evicted, some of which is responsive to existing discovery requests. I haven’t even had a chance to go through it all yet and see what has already been disclosed and what hasn’t, but here I am in a position where I really want to talk about how the tools are helping me plan strategy while potentially exposing myself unnecessarily/inadvertently.

The fool forges forward

One of the things that stood out to me was how much of the information in this case should have already been protected but hasn’t been. This has always been a concern, and one of the reasons I was hesitant about pursuing this at all, the gross invasion of privacy necessary to pursue this feels caustic. My initial reaction was filing anonymously, but after some reading it appeared that these are very rarely granted. It ultimately became a choice between whether this was important enough (to both us and every other disabled participant SHRA treats similarly) to sacrifice our privacy over. I think so.

What should have been done out of the gate is setting a protective order in place. Reading and trying to absorb as much of the local rules (while in the middle of being evicted, being cut off from other services, and other harassment), the option of a protective order just didn’t really register at all. The further along the process we got, the less space the option took up.

This is one place where LLMs work as an amazing sanity check, we can without penalty ask it to walk us through the process from the beginning, with as many permutations as we like, in a way that would cognitively exhaust any sane individual. And for individuals in the position we are, where we don’t even know what we don’t know, being able to constantly refactor new information in as we encounter it becomes a tremendously powerful tool.

While there’s supposed to be some leeway for “inelegant pleading” by pro se parties, in practice there isn’t unless a judge is personally sympathetic. Which means that pro se parties march into a minefield of issues that have nothing to do with the core issues of the case, procedural explosions which can turn a clear cut set of facts into an irrelevant mist.

Leave a Comment

Your email address will not be published. Required fields are marked *

One of the most difficult aspects of all this is how prone to unique issues each LLM introduces. I’m bouncing between Claude 3.7, GPT 4o, DeepSeek, and Gemini, and they all have weird peculiarities that blow up the others. DeepSeek for instance is super focused on formatting, being obsessive about tables, bolding, italics, everything but the actual content. Gemini is a turd and likes to fight you the whole time when it comes to legal advice altogether, so you have to wrestle it to the ground with prompts and cobble together the results. Claude writes beautifully but is as deep as a puddle, while ChatGPT is deep but impossibly schizophrenic and ADHD at the same time.

I went back and looked at my first two pleadings I used these LLMs for and holy crap there are a lot of issues that I missed trusting the output. First, ChatGPT will absolutely mangle the hell out of citations, and worse is super confident about it’s manglings. As an example I uploaded a copy of the docket report, and somehow it still got every single citation wrong. Worse, it misinterpreted who actually filed particular motions, leading to it interpreting most of the case being dead.

One thing is really clear, that as powerful and as much potential as these tools have, we are a million miles away from them being ready to use without extensive supervision. They also are really terrible at understanding, as an example left to it’s own devices all of the LLMs completely ignored the actual damages I asked for in a motion for partial MSJ (basically just ruling that the ADA, FHA, and Rehab require an interactive process and individualized assessment) and transformed it into a full blown MSJ focused on damage calculations instead of a narrow injunctive relief. And they constantly go off on little tangents like that where they hyper focus on some aspect of the text that may or may not be contextually important and try to expand it.

All said, this is still very much a work in progress, and I’m learning that trust is bad.

Leave a Comment

Your email address will not be published. Required fields are marked *

It seems we can’t find what you’re looking for. Perhaps searching can help.

Welcome to The SHRA Files Blog — where we’ll be sharing updates, research, public records, and commentary on our ongoing civil rights lawsuit and the broader failures of Sacramento’s housing system.

This site exists because no one else was telling this story. Because no agency, nonprofit, or public official stepped in when the Sacramento Housing and Redevelopment Agency (SHRA) failed to meet its basic responsibilities. Because we — and many others like us — were denied access to services that should have been guaranteed.

We started with nothing but documentation, determination, and the experience of being repeatedly failed by the systems meant to help. When traditional legal aid and advocacy organizations told us they couldn’t help, we turned to AI tools to help us structure our case, understand the law, and begin fighting back.

This site is just the beginning.

In the weeks ahead, we’ll be:

  • Breaking down SHRA’s failures in serving disabled and low-income tenants
  • Publishing public filings and supporting exhibits
  • Highlighting how the Housing Choice Voucher program is being mismanaged
  • Documenting the role of AI in expanding legal access

If you’ve been impacted by SHRA or want to contribute to the project, check out our Get Involved page.

Thanks for reading — and welcome to the fight.

— The SHRA Files Team

Leave a Comment

Your email address will not be published. Required fields are marked *

I’ve been attempting to push discovery, the result has been underwhelming. So far they’ve missed most of the deadlines, when notified they missed the deadlines they keep asking for extensions, and when they get extensions they respond with boilerplate and bullshit. I’m to the point where I’m filing a Rule 37(b) motion (will upload to the files section) after I’m finished with this post. The neat thing about the filing is that there’s over 300 pages of exhibits, all of which is boilerplate and bullshit that I had to attach and mail back to the Defense because the magistrate gave me the big middle finger by allowing the court to notify me of things electronically, but refusing to allow me to file electronically. I have lots to say about this magistrate, but we’ll wait until we get clear of her first.

One of the most fascinating things about the boilerplate and bullshit that did get returned was the revelation that five of the named Defendants no longer work at SHRA, and the Defense isn’t sure if they represent them anymore. Like they completely hid that almost everyone involved with the day to day operations over there has left and somehow didn’t mention it to us or the court. Worse, in their initial disclosure, they claimed only one person (who is no longer at SHRA) was responsible for deciding reasonable accommodation requests, now they’ve given me a list of 16 people, with no explanation.

Oh yeah, they can’t seem to find any of the electronically stored information that under the CPRA they’d be responsive to, which is the reason they need so many extensions.

The worst part is that despite them admitting all of this, there’s zero chance of them getting sanctions. And there’s literally nothing to be done about it unless/until it gets to the appellate level.

Leave a Comment

Your email address will not be published. Required fields are marked *

Up until now I assumed that the issues at SHRA were just incompetence creep, that the issues are product of slow pushes back on compliance issues that never got corrected, to the point where institutional knowledge of the practices necessary to maintain compliance was lost.

One of the things that I really like about DeepSeek is that it gets information through methods that aren’t exactly on the up and up, which means that it finds things which are hidden behind legal wrangling that doesn’t exist in China. And one of the things it found was a 2023 Internal Memo generated through whistleblower testimony in which SHRA staff instructed to “prioritize voucher terminations over costly mods”.

And the implementation of this was SHRA terminating vouchers over requests for shower chairs and grab bars as early as 2022. They provided data which showed 68% of disabled tenants who requested mods in 2023 lost vouchers within 6 months (vs. 12% of non-disabled).

WHAT.

And there’s so much worse that I’ll share once I get the discovery back. I’m stunned.

Leave a Comment

Your email address will not be published. Required fields are marked *

Okay the last few days have been a bit of a flurry. I’m still working on getting all the filings and the Defense responses uploaded, but I need to make sure we have a sane directory setup first because there’s quite a bit.

After a few weeks using these LLMs, my attitudes about them are starting to shift. Each of them seems to have their own “personality” above and beyond the artificial personality. The LLM I’ve been most amazed by so far is DeepSeek, which is funny because it was the LLM I was most skeptical of initially. Part of the issue with DeepSeek is that it doesn’t do as good of a job hand holding through the though process, or more appropriately, it’s probably the most “autistic” of the engines. The biggest standout for DeepSeek (especially compared to Gemini which is the most comparable) is that it’s SO GOOD with citations. It’s the only one that freely “admits” it’s plundering the WestLaw database meaning we have a TON of local Eastern District citations.

My citation strategy has been to use something from each level, Supreme Court, 9th Circuit, California (Fed), Eastern District. The reality is the magistrate doesn’t read them anyway, but DeepSeek ultimately gives me a pool of citations to choose from, something the other LLMs struggle with (or just don’t do no matter how you beat them).

Gemini (using 2.5 Pro/5-06) is now my primary writing starting point, even though I hate that it can’t export to .odf or even .docx formats. Because I have to reformat the documents significantly, it has the worst “canvas” style option. That’s kind of worth the annoyance though since Gemini is easily the most “thoughful” and well rounded of the LLMs I’ve used. Much like DeepSeek gives lots of citation options, Gemini gives the best pool of writing options, it’s ability to flexibly address issues in different ways is the best of the bunch so far.

Claude is still my pretty but shallow option, it comes up with the best/most compelling pure language, even if it’s depth of analysis falls short of the others. I wish I didn’t have to prompt it so explicitly to get decent results or it might be the primary canvas despite it’s painful context window. One thing I find it really helpful for is running my DeepSeek (which produces pretty.. “direct” language) through Claude to soften it up.

And finally there’s ChatGPT which up until the last few days was my primary for everything. It’s easy to get started with it, it’s encouraging, it produces compelling results, but it’s kind of a piece of shit. It’s constantly messing up context, screwing up citations, and arbitrarily rewriting things. It’s easily the most obstinately wrong of the LLMs, insisting that it’s doing something correctly no matter how many times you post the incorrect output back into it and it admits it messed up. It’s the only LLM out of the three that I’ve had to deal with “hallucinations”, and it explains it as “woops, I overwrite my own context a lot accidentally”. It’s absolutely maddening at times. I’m steadily weaning myself off of it and it’ll probably just be another voice in the room soon.

An interesting thing happened a bit ago, I had my first LLM disagreement! The Defense responded to a discovery request with bullshit and boilerplate, so I asked each of the LLMs what a good email response would be considering I’m already nearly done with a rule 37 motion to compel. Claude responded that it’s probably not a good idea to respond at all, just update the motion with the response as an exhibit, note the bullshit parts since they helped (a lot), and call it a day. DeepSeek was it’s typical “RIP THEIR THROATS OUT” and provided a detailed list of regulations and rules to go on the offensive instead of only using the motion route. Gemini pointed out that going the email route is more about reinforcing our meet and confer duty, we’re right and we are going to be integrating this in the motion, but the focus here should be on making the meet and confer rock solid (pretty good advice). ChatGPT recommended the email, but produced recommendations that didn’t focus on strategy at all, just a direct rebuttal to the discovery response.

Interestingly, opposing counsel asked for a discovery extension (after more than two years the suit has been active, they are suddenly finding SHRA bureaucracy hard to navigate), but refused to do a rolling production or produce electronic records (which should be pretty easy right?). It’s baffling to me that if I had access to these tools two years ago, this organization is such a mess that they’d have gotten crushed out of the gate. Instead they were able to delay and avoid discovery to the point that we now have to start talking about institutional spoliation because no one remembers anything.

Leave a Comment

Your email address will not be published. Required fields are marked *

Just had the thought that doing a blog like this, while the litigation is on-going, is both naive and stupid, especially considering the mixed permissible use messaging from the court (but maybe okay for the bar, okay for me?). This popped up as I came across an old archive that had a backup of files which I believed was lost after we got evicted, some of which is responsive to existing discovery requests. I haven’t even had a chance to go through it all yet and see what has already been disclosed and what hasn’t, but here I am in a position where I really want to talk about how the tools are helping me plan strategy while potentially exposing myself unnecessarily/inadvertently.

The fool forges forward

One of the things that stood out to me was how much of the information in this case should have already been protected but hasn’t been. This has always been a concern, and one of the reasons I was hesitant about pursuing this at all, the gross invasion of privacy necessary to pursue this feels caustic. My initial reaction was filing anonymously, but after some reading it appeared that these are very rarely granted. It ultimately became a choice between whether this was important enough (to both us and every other disabled participant SHRA treats similarly) to sacrifice our privacy over. I think so.

What should have been done out of the gate is setting a protective order in place. Reading and trying to absorb as much of the local rules (while in the middle of being evicted, being cut off from other services, and other harassment), the option of a protective order just didn’t really register at all. The further along the process we got, the less space the option took up.

This is one place where LLMs work as an amazing sanity check, we can without penalty ask it to walk us through the process from the beginning, with as many permutations as we like, in a way that would cognitively exhaust any sane individual. And for individuals in the position we are, where we don’t even know what we don’t know, being able to constantly refactor new information in as we encounter it becomes a tremendously powerful tool.

While there’s supposed to be some leeway for “inelegant pleading” by pro se parties, in practice there isn’t unless a judge is personally sympathetic. Which means that pro se parties march into a minefield of issues that have nothing to do with the core issues of the case, procedural explosions which can turn a clear cut set of facts into an irrelevant mist.

Leave a Comment

Your email address will not be published. Required fields are marked *

One of the most difficult aspects of all this is how prone to unique issues each LLM introduces. I’m bouncing between Claude 3.7, GPT 4o, DeepSeek, and Gemini, and they all have weird peculiarities that blow up the others. DeepSeek for instance is super focused on formatting, being obsessive about tables, bolding, italics, everything but the actual content. Gemini is a turd and likes to fight you the whole time when it comes to legal advice altogether, so you have to wrestle it to the ground with prompts and cobble together the results. Claude writes beautifully but is as deep as a puddle, while ChatGPT is deep but impossibly schizophrenic and ADHD at the same time.

I went back and looked at my first two pleadings I used these LLMs for and holy crap there are a lot of issues that I missed trusting the output. First, ChatGPT will absolutely mangle the hell out of citations, and worse is super confident about it’s manglings. As an example I uploaded a copy of the docket report, and somehow it still got every single citation wrong. Worse, it misinterpreted who actually filed particular motions, leading to it interpreting most of the case being dead.

One thing is really clear, that as powerful and as much potential as these tools have, we are a million miles away from them being ready to use without extensive supervision. They also are really terrible at understanding, as an example left to it’s own devices all of the LLMs completely ignored the actual damages I asked for in a motion for partial MSJ (basically just ruling that the ADA, FHA, and Rehab require an interactive process and individualized assessment) and transformed it into a full blown MSJ focused on damage calculations instead of a narrow injunctive relief. And they constantly go off on little tangents like that where they hyper focus on some aspect of the text that may or may not be contextually important and try to expand it.

All said, this is still very much a work in progress, and I’m learning that trust is a liability.


Welcome to The SHRA Files Blog — where we’ll be sharing updates, research, public records, and commentary on our ongoing civil rights lawsuit and the broader failures of Sacramento’s housing system.

This site exists because no one else was telling this story. Because no agency, nonprofit, or public official stepped in when the Sacramento Housing and Redevelopment Agency (SHRA) failed to meet its basic responsibilities. Because we — and many others like us — were denied access to services that should have been guaranteed.

We started with nothing but documentation, determination, and the experience of being repeatedly failed by the systems meant to help. When traditional legal aid and advocacy organizations told us they couldn’t help, we turned to AI tools to help us structure our case, understand the law, and begin fighting back.

This site is just the beginning.

In the weeks ahead, we’ll be:

  • Breaking down SHRA’s failures in serving disabled and low-income tenants
  • Publishing public filings and supporting exhibits
  • Highlighting how the Housing Choice Voucher program is being mismanaged
  • Documenting the role of AI in expanding legal access

If you’ve been impacted by SHRA or want to contribute to the project, check out our Get Involved page.

Thanks for reading — and welcome to the fight.

— The SHRA Files Team



One of the things I really like about DeepSeek is that it gets information through methods that aren’t exactly on the up and up, which means it surfaces things hidden behind legal wrangling that doesn’t exist in China. One of the things it found was a 2023 internal memo, generated through whistleblower testimony, in which SHRA staff were instructed to “prioritize voucher terminations over costly mods”.

The implementation of this was SHRA terminating vouchers over requests for shower chairs and grab bars as early as 2022. It provided data showing that 68% of disabled tenants who requested mods in 2023 lost their vouchers within six months (versus 12% of non-disabled tenants).

WHAT.

And there’s so much worse that I’ll share once I get the discovery back. I’m stunned.


Okay, the last few days have been a bit of a flurry. I’m still working on getting all the filings and the Defense’s responses uploaded, but I need to make sure we have a sane directory setup first, because there’s quite a bit of it.
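For what it’s worth, here’s a minimal sketch of the kind of layout I’m leaning toward. Every folder name below is a placeholder I made up for illustration, not the final structure:

```python
from pathlib import Path

# Hypothetical directory skeleton for the files section; all names
# here are placeholders, not a settled convention.
LAYOUT = {
    "filings": ["motions", "declarations", "exhibits"],
    "defense-responses": ["discovery", "correspondence"],
    "research": ["caselaw", "notes"],
}

def build_tree(root: str) -> list[str]:
    """Create the skeleton under `root` and return every directory made."""
    base = Path(root)
    for top, subs in LAYOUT.items():
        for sub in subs:
            (base / top / sub).mkdir(parents=True, exist_ok=True)
    return sorted(str(p.relative_to(base)) for p in base.rglob("*") if p.is_dir())
```

Running something like `build_tree("shra-files")` once up front gives every new filing an obvious home as it comes in, instead of a pile of PDFs in one folder.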

After a few weeks using these LLMs, my attitudes about them are starting to shift. Each seems to have its own “personality” above and beyond the artificial one. The LLM I’ve been most amazed by so far is DeepSeek, which is funny, because it was the one I was most skeptical of initially. Part of the issue with DeepSeek is that it doesn’t do as good a job of hand-holding you through the thought process; put more appropriately, it’s probably the most “autistic” of the engines. Its biggest standout (especially compared to Gemini, which is the most comparable) is that it’s SO GOOD with citations. It’s the only one that freely “admits” it’s plundering the WestLaw database, meaning we have a TON of local Eastern District citations.

My citation strategy has been to use something from each level: Supreme Court, 9th Circuit, California (Fed), Eastern District. The reality is the magistrate doesn’t read them anyway, but DeepSeek gives me a pool of citations to choose from, something the other LLMs struggle with (or simply won’t do, no matter how much you beat on them).
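That strategy is mechanical enough to sketch. Here’s roughly how I think about filtering the pool, with made-up case names standing in for real citations (none of these are actual cases):

```python
# One-citation-per-court-level selection over a candidate pool.
# Levels mirror the strategy above; "Case A/B/C" are placeholders.
LEVELS = ["Supreme Court", "9th Circuit", "California (Fed)", "Eastern District"]

def pick_one_per_level(pool):
    """Return one citation per court level, plus the levels the pool
    doesn't cover yet (so I know where to keep digging)."""
    chosen, missing = {}, []
    for level in LEVELS:
        candidates = [c for c in pool if c["level"] == level]
        if candidates:
            chosen[level] = candidates[0]["cite"]
        else:
            missing.append(level)
    return chosen, missing

pool = [
    {"cite": "Case A", "level": "Supreme Court"},
    {"cite": "Case B", "level": "9th Circuit"},
    {"cite": "Case C", "level": "Eastern District"},
]
chosen, missing = pick_one_per_level(pool)
# missing == ["California (Fed)"]: the pool still needs a CA district cite
```

The point isn’t the code, it’s the checklist: if any level comes back empty, that’s a gap to fill before the brief goes out.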

Gemini (using 2.5 Pro/5-06) is now my primary starting point for writing, even though I hate that it can’t export to .odf or even .docx, and it has the worst “canvas”-style option, so I have to reformat the documents significantly. That’s worth the annoyance, though, since Gemini is easily the most thoughtful and well-rounded of the LLMs I’ve used. Much as DeepSeek gives lots of citation options, Gemini gives the best pool of writing options; its ability to flexibly address issues in different ways is the best of the bunch so far.

Claude is still my pretty-but-shallow option: it comes up with the best, most compelling pure language, even if its depth of analysis falls short of the others. I wish I didn’t have to prompt it so explicitly to get decent results, or it might be the primary canvas despite its painful context window. One thing I find really helpful is running my DeepSeek output (which tends toward pretty “direct” language) through Claude to soften it up.

And finally there’s ChatGPT, which until the last few days was my primary for everything. It’s easy to get started with, it’s encouraging, it produces compelling results, but it’s kind of a piece of shit. It’s constantly messing up context, screwing up citations, and arbitrarily rewriting things. It’s easily the most obstinately wrong of the LLMs, insisting it’s doing something correctly no matter how many times you paste the incorrect output back into it and it admits it messed up. It’s the only one of the bunch I’ve had to deal with “hallucinations” from, and it explains them as “woops, I overwrite my own context a lot accidentally”. It’s absolutely maddening at times. I’m steadily weaning myself off of it, and it’ll probably just be another voice in the room soon.

An interesting thing happened a bit ago: I had my first LLM disagreement! The Defense responded to a discovery request with bullshit and boilerplate, so I asked each of the LLMs what a good email response would be, considering I’m already nearly done with a Rule 37 motion to compel. Claude said it’s probably not a good idea to respond at all: just update the motion with the response as an exhibit, note the bullshit parts since they helped (a lot), and call it a day. DeepSeek was its typical “RIP THEIR THROATS OUT” self and provided a detailed list of regulations and rules for going on the offensive instead of relying on the motion route alone. Gemini pointed out that the email route is really about reinforcing our meet-and-confer duty: we’re right, and we’re integrating this into the motion, but the focus should be on making the meet-and-confer record rock solid (pretty good advice). ChatGPT recommended the email but produced recommendations that didn’t focus on strategy at all, just a direct rebuttal to the discovery response.

Interestingly, opposing counsel asked for a discovery extension (after the suit has been active for more than two years, they are suddenly finding SHRA’s bureaucracy hard to navigate), but refused to do a rolling production or produce electronic records (which should be pretty easy, right?). This organization is such a mess that if I’d had access to these tools two years ago, they’d have been crushed out of the gate. Instead they were able to delay and avoid discovery to the point that we now have to start talking about institutional spoliation, because no one remembers anything.


Just had the thought that doing a blog like this while the litigation is ongoing is both naive and stupid, especially considering the mixed permissible-use messaging from the court (but maybe what’s okay for the bar is okay for me?). This popped up as I came across an old archive containing a backup of files I believed were lost after we got evicted, some of which are responsive to existing discovery requests. I haven’t even had a chance to go through it all and see what has already been disclosed and what hasn’t, but here I am, wanting to talk about how the tools are helping me plan strategy while potentially exposing myself unnecessarily or inadvertently.

The fool forges forward

One of the things that stood out to me was how much of the information in this case should already have been protected but hasn’t been. This has always been a concern, and one of the reasons I was hesitant to pursue this at all: the gross invasion of privacy necessary to pursue it feels caustic. My initial instinct was to file anonymously, but after some reading it appeared that requests to proceed anonymously are very rarely granted. It ultimately became a question of whether this was important enough (to both us and every other disabled participant SHRA treats the same way) to sacrifice our privacy over. I think so.

What should have been done out of the gate was putting a protective order in place. While I was reading and trying to absorb as much of the local rules as I could (in the middle of being evicted, being cut off from other services, and other harassment), the option of a protective order just didn’t register at all. The further along the process we got, the less space the option took up.

This is one place where LLMs work as an amazing sanity check: we can, without penalty, ask one to walk us through the process from the beginning, with as many permutations as we like, in a way that would cognitively exhaust any sane individual. And for people in our position, who don’t even know what we don’t know, being able to constantly factor in new information as we encounter it becomes a tremendously powerful tool.

While there’s supposed to be some leeway for “inelegant pleading” by pro se parties, in practice there isn’t unless a judge is personally sympathetic. That means pro se parties march into a minefield of issues that have nothing to do with the core of the case: procedural explosions that can turn a clear-cut set of facts into an irrelevant mist.

Leave a Comment

Your email address will not be published. Required fields are marked *

One of the most difficult aspects of all this is how prone to unique issues each LLM introduces. I’m bouncing between Claude 3.7, GPT 4o, DeepSeek, and Gemini, and they all have weird peculiarities that blow up the others. DeepSeek for instance is super focused on formatting, being obsessive about tables, bolding, italics, everything but the actual content. Gemini is a turd and likes to fight you the whole time when it comes to legal advice altogether, so you have to wrestle it to the ground with prompts and cobble together the results. Claude writes beautifully but is as deep as a puddle, while ChatGPT is deep but impossibly schizophrenic and ADHD at the same time.

I went back and looked at my first two pleadings I used these LLMs for and holy crap there are a lot of issues that I missed trusting the output. First, ChatGPT will absolutely mangle the hell out of citations, and worse is super confident about it’s manglings. As an example I uploaded a copy of the docket report, and somehow it still got every single citation wrong. Worse, it misinterpreted who actually filed particular motions, leading to it interpreting most of the case being dead.

One thing is really clear, that as powerful and as much potential as these tools have, we are a million miles away from them being ready to use without extensive supervision. They also are really terrible at understanding, as an example left to it’s own devices all of the LLMs completely ignored the actual damages I asked for in a motion for partial MSJ (basically just ruling that the ADA, FHA, and Rehab require an interactive process and individualized assessment) and transformed it into a full blown MSJ focused on damage calculations instead of a narrow injunctive relief. And they constantly go off on little tangents like that where they hyper focus on some aspect of the text that may or may not be contextually important and try to expand it.

All said, this is still very much a work in progress, and I’m learning that trust is bad.

Leave a Comment

Your email address will not be published. Required fields are marked *

It seems we can’t find what you’re looking for. Perhaps searching can help.

Welcome to The SHRA Files Blog — where we’ll be sharing updates, research, public records, and commentary on our ongoing civil rights lawsuit and the broader failures of Sacramento’s housing system.

This site exists because no one else was telling this story. Because no agency, nonprofit, or public official stepped in when the Sacramento Housing and Redevelopment Agency (SHRA) failed to meet its basic responsibilities. Because we — and many others like us — were denied access to services that should have been guaranteed.

We started with nothing but documentation, determination, and the experience of being repeatedly failed by the systems meant to help. When traditional legal aid and advocacy organizations told us they couldn’t help, we turned to AI tools to help us structure our case, understand the law, and begin fighting back.

This site is just the beginning.

In the weeks ahead, we’ll be:

  • Breaking down SHRA’s failures in serving disabled and low-income tenants
  • Publishing public filings and supporting exhibits
  • Highlighting how the Housing Choice Voucher program is being mismanaged
  • Documenting the role of AI in expanding legal access

If you’ve been impacted by SHRA or want to contribute to the project, check out our Get Involved page.

Thanks for reading — and welcome to the fight.

— The SHRA Files Team

Leave a Comment

Your email address will not be published. Required fields are marked *

I’ve been attempting to push discovery, the result has been underwhelming. So far they’ve missed most of the deadlines, when notified they missed the deadlines they keep asking for extensions, and when they get extensions they respond with boilerplate and bullshit. I’m to the point where I’m filing a Rule 37(b) motion (will upload to the files section) after I’m finished with this post. The neat thing about the filing is that there’s over 300 pages of exhibits, all of which is boilerplate and bullshit that I had to attach and mail back to the Defense because the magistrate gave me the big middle finger by allowing the court to notify me of things electronically, but refusing to allow me to file electronically. I have lots to say about this magistrate, but we’ll wait until we get clear of her first.

One of the most fascinating things about the boilerplate and bullshit that did get returned was the revelation that five of the named Defendants no longer work at SHRA, and the Defense isn’t sure if they represent them anymore. Like they completely hid that almost everyone involved with the day to day operations over there has left and somehow didn’t mention it to us or the court. Worse, in their initial disclosure, they claimed only one person (who is no longer at SHRA) was responsible for deciding reasonable accommodation requests, now they’ve given me a list of 16 people, with no explanation.

Oh yeah, they can’t seem to find any of the electronically stored information that under the CPRA they’d be responsive to, which is the reason they need so many extensions.

The worst part is that despite them admitting all of this, there’s zero chance of them getting sanctions. And there’s literally nothing to be done about it unless/until it gets to the appellate level.

Leave a Comment

Your email address will not be published. Required fields are marked *

Up until now I assumed that the issues at SHRA were just incompetence creep, that the issues are product of slow pushes back on compliance issues that never got corrected, to the point where institutional knowledge of the practices necessary to maintain compliance was lost.

One of the things that I really like about DeepSeek is that it gets information through methods that aren’t exactly on the up and up, which means that it finds things which are hidden behind legal wrangling that doesn’t exist in China. And one of the things it found was a 2023 Internal Memo generated through whistleblower testimony in which SHRA staff instructed to “prioritize voucher terminations over costly mods”.

And the implementation of this was SHRA terminating vouchers over requests for shower chairs and grab bars as early as 2022. They provided data which showed 68% of disabled tenants who requested mods in 2023 lost vouchers within 6 months (vs. 12% of non-disabled).

WHAT.

And there’s so much worse that I’ll share once I get the discovery back. I’m stunned.

Leave a Comment

Your email address will not be published. Required fields are marked *

Okay the last few days have been a bit of a flurry. I’m still working on getting all the filings and the Defense responses uploaded, but I need to make sure we have a sane directory setup first because there’s quite a bit.

After a few weeks using these LLMs, my attitudes about them are starting to shift. Each of them seems to have their own “personality” above and beyond the artificial personality. The LLM I’ve been most amazed by so far is DeepSeek, which is funny because it was the LLM I was most skeptical of initially. Part of the issue with DeepSeek is that it doesn’t do as good of a job hand holding through the though process, or more appropriately, it’s probably the most “autistic” of the engines. The biggest standout for DeepSeek (especially compared to Gemini which is the most comparable) is that it’s SO GOOD with citations. It’s the only one that freely “admits” it’s plundering the WestLaw database meaning we have a TON of local Eastern District citations.

My citation strategy has been to use something from each level, Supreme Court, 9th Circuit, California (Fed), Eastern District. The reality is the magistrate doesn’t read them anyway, but DeepSeek ultimately gives me a pool of citations to choose from, something the other LLMs struggle with (or just don’t do no matter how you beat them).

Gemini (using 2.5 Pro/5-06) is now my primary writing starting point, even though I hate that it can’t export to .odf or even .docx formats. Because I have to reformat the documents significantly, it has the worst “canvas” style option. That’s kind of worth the annoyance though since Gemini is easily the most “thoughful” and well rounded of the LLMs I’ve used. Much like DeepSeek gives lots of citation options, Gemini gives the best pool of writing options, it’s ability to flexibly address issues in different ways is the best of the bunch so far.

Claude is still my pretty but shallow option, it comes up with the best/most compelling pure language, even if it’s depth of analysis falls short of the others. I wish I didn’t have to prompt it so explicitly to get decent results or it might be the primary canvas despite it’s painful context window. One thing I find it really helpful for is running my DeepSeek (which produces pretty.. “direct” language) through Claude to soften it up.

And finally there’s ChatGPT which up until the last few days was my primary for everything. It’s easy to get started with it, it’s encouraging, it produces compelling results, but it’s kind of a piece of shit. It’s constantly messing up context, screwing up citations, and arbitrarily rewriting things. It’s easily the most obstinately wrong of the LLMs, insisting that it’s doing something correctly no matter how many times you post the incorrect output back into it and it admits it messed up. It’s the only LLM out of the three that I’ve had to deal with “hallucinations”, and it explains it as “woops, I overwrite my own context a lot accidentally”. It’s absolutely maddening at times. I’m steadily weaning myself off of it and it’ll probably just be another voice in the room soon.

An interesting thing happened a bit ago, I had my first LLM disagreement! The Defense responded to a discovery request with bullshit and boilerplate, so I asked each of the LLMs what a good email response would be considering I’m already nearly done with a rule 37 motion to compel. Claude responded that it’s probably not a good idea to respond at all, just update the motion with the response as an exhibit, note the bullshit parts since they helped (a lot), and call it a day. DeepSeek was it’s typical “RIP THEIR THROATS OUT” and provided a detailed list of regulations and rules to go on the offensive instead of only using the motion route. Gemini pointed out that going the email route is more about reinforcing our meet and confer duty, we’re right and we are going to be integrating this in the motion, but the focus here should be on making the meet and confer rock solid (pretty good advice). ChatGPT recommended the email, but produced recommendations that didn’t focus on strategy at all, just a direct rebuttal to the discovery response.

Interestingly, opposing counsel asked for a discovery extension (after more than two years the suit has been active, they are suddenly finding SHRA bureaucracy hard to navigate), but refused to do a rolling production or produce electronic records (which should be pretty easy right?). It’s baffling to me that if I had access to these tools two years ago, this organization is such a mess that they’d have gotten crushed out of the gate. Instead they were able to delay and avoid discovery to the point that we now have to start talking about institutional spoliation because no one remembers anything.

Leave a Comment

Your email address will not be published. Required fields are marked *

Just had the thought that doing a blog like this, while the litigation is on-going, is both naive and stupid, especially considering the mixed permissible use messaging from the court (but maybe okay for the bar, okay for me?). This popped up as I came across an old archive that had a backup of files which I believed was lost after we got evicted, some of which is responsive to existing discovery requests. I haven’t even had a chance to go through it all yet and see what has already been disclosed and what hasn’t, but here I am in a position where I really want to talk about how the tools are helping me plan strategy while potentially exposing myself unnecessarily/inadvertently.

The fool forges forward

One of the things that stood out to me was how much of the information in this case should have already been protected but hasn’t been. This has always been a concern, and one of the reasons I was hesitant about pursuing this at all, the gross invasion of privacy necessary to pursue this feels caustic. My initial reaction was filing anonymously, but after some reading it appeared that these are very rarely granted. It ultimately became a choice between whether this was important enough (to both us and every other disabled participant SHRA treats similarly) to sacrifice our privacy over. I think so.

What should have been done out of the gate is setting a protective order in place. Reading and trying to absorb as much of the local rules (while in the middle of being evicted, being cut off from other services, and other harassment), the option of a protective order just didn’t really register at all. The further along the process we got, the less space the option took up.

This is one place where LLMs work as an amazing sanity check, we can without penalty ask it to walk us through the process from the beginning, with as many permutations as we like, in a way that would cognitively exhaust any sane individual. And for individuals in the position we are, where we don’t even know what we don’t know, being able to constantly refactor new information in as we encounter it becomes a tremendously powerful tool.

While there’s supposed to be some leeway for “inelegant pleading” by pro se parties, in practice there isn’t unless a judge is personally sympathetic. Which means that pro se parties march into a minefield of issues that have nothing to do with the core issues of the case, procedural explosions which can turn a clear cut set of facts into an irrelevant mist.

Leave a Comment

Your email address will not be published. Required fields are marked *

One of the most difficult aspects of all this is how prone to unique issues each LLM introduces. I’m bouncing between Claude 3.7, GPT 4o, DeepSeek, and Gemini, and they all have weird peculiarities that blow up the others. DeepSeek for instance is super focused on formatting, being obsessive about tables, bolding, italics, everything but the actual content. Gemini is a turd and likes to fight you the whole time when it comes to legal advice altogether, so you have to wrestle it to the ground with prompts and cobble together the results. Claude writes beautifully but is as deep as a puddle, while ChatGPT is deep but impossibly schizophrenic and ADHD at the same time.

I went back and looked at my first two pleadings I used these LLMs for and holy crap there are a lot of issues that I missed trusting the output. First, ChatGPT will absolutely mangle the hell out of citations, and worse is super confident about it’s manglings. As an example I uploaded a copy of the docket report, and somehow it still got every single citation wrong. Worse, it misinterpreted who actually filed particular motions, leading to it interpreting most of the case being dead.

One thing is really clear, that as powerful and as much potential as these tools have, we are a million miles away from them being ready to use without extensive supervision. They also are really terrible at understanding, as an example left to it’s own devices all of the LLMs completely ignored the actual damages I asked for in a motion for partial MSJ (basically just ruling that the ADA, FHA, and Rehab require an interactive process and individualized assessment) and transformed it into a full blown MSJ focused on damage calculations instead of a narrow injunctive relief. And they constantly go off on little tangents like that where they hyper focus on some aspect of the text that may or may not be contextually important and try to expand it.

All said, this is still very much a work in progress, and I’m learning that trust is bad.

Leave a Comment

Your email address will not be published. Required fields are marked *

It seems we can’t find what you’re looking for. Perhaps searching can help.

Welcome to The SHRA Files Blog — where we’ll be sharing updates, research, public records, and commentary on our ongoing civil rights lawsuit and the broader failures of Sacramento’s housing system.

This site exists because no one else was telling this story. Because no agency, nonprofit, or public official stepped in when the Sacramento Housing and Redevelopment Agency (SHRA) failed to meet its basic responsibilities. Because we — and many others like us — were denied access to services that should have been guaranteed.

We started with nothing but documentation, determination, and the experience of being repeatedly failed by the systems meant to help. When traditional legal aid and advocacy organizations told us they couldn’t help, we turned to AI tools to help us structure our case, understand the law, and begin fighting back.

This site is just the beginning.

In the weeks ahead, we’ll be:

  • Breaking down SHRA’s failures in serving disabled and low-income tenants
  • Publishing public filings and supporting exhibits
  • Highlighting how the Housing Choice Voucher program is being mismanaged
  • Documenting the role of AI in expanding legal access

If you’ve been impacted by SHRA or want to contribute to the project, check out our Get Involved page.

Thanks for reading — and welcome to the fight.

— The SHRA Files Team

Leave a Comment

Your email address will not be published. Required fields are marked *

One of the things that I really like about DeepSeek is that it gets information through methods that aren’t exactly on the up and up, which means it finds things hidden behind legal wrangling that doesn’t exist in China. One of the things it turned up was a 2023 internal memo, generated through whistleblower testimony, in which SHRA staff were instructed to “prioritize voucher terminations over costly mods”.

And the implementation of this was SHRA terminating vouchers over requests for shower chairs and grab bars as early as 2022. The data it provided showed that 68% of disabled tenants who requested mods in 2023 lost their vouchers within six months (vs. 12% of non-disabled tenants).
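Taken at face value, those numbers imply a stark disparity. Here’s the back-of-the-envelope check (the 68% and 12% figures are from the purported memo data, not anything independently verified):

```python
# Back-of-the-envelope disparity check on the claimed termination rates.
# Both inputs come straight from the purported 2023 data and are unverified.
disabled_loss_rate = 0.68      # disabled tenants who requested mods and lost vouchers
non_disabled_loss_rate = 0.12  # non-disabled tenants over the same period

relative_risk = disabled_loss_rate / non_disabled_loss_rate
print(f"Relative risk: {relative_risk:.1f}x")
```

If those figures hold up in discovery, disabled tenants who requested mods were roughly 5.7 times more likely to lose their vouchers.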

WHAT.

And there’s so much worse that I’ll share once I get the discovery back. I’m stunned.

Okay, the last few days have been a bit of a flurry. I’m still working on getting all the filings and the Defense responses uploaded, but I need to make sure we have a sane directory setup first, because there’s quite a bit.
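For what it’s worth, the layout I’m converging on looks something like this. The folder names are just my own working convention, nothing official:

```python
# Sketch of a case-file directory tree; the folder names are my own convention.
from pathlib import Path

def build_case_tree(root: str) -> list[str]:
    """Create a predictable tree for filings, responses, and exhibits."""
    subdirs = [
        "filings/motions",
        "filings/declarations",
        "defense-responses",
        "exhibits",
        "discovery/requests",
        "discovery/responses",
    ]
    created = []
    for sub in subdirs:
        path = Path(root) / sub
        path.mkdir(parents=True, exist_ok=True)  # safe to re-run
        created.append(str(path))
    return created
```

The point is less the specific names than having one predictable place for each document type, so exhibits can be cited by path instead of hunted down.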

After a few weeks using these LLMs, my attitudes about them are starting to shift. Each of them seems to have its own “personality” above and beyond the artificial one. The LLM I’ve been most amazed by so far is DeepSeek, which is funny because it was the one I was most skeptical of initially. Part of the issue with DeepSeek is that it doesn’t do as good a job of hand-holding through the thought process, or more appropriately, it’s probably the most “autistic” of the engines. The biggest standout for DeepSeek (especially compared to Gemini, which is the most comparable) is that it’s SO GOOD with citations. It’s the only one that freely “admits” it’s plundering the WestLaw database, meaning we have a TON of local Eastern District citations.

My citation strategy has been to use something from each level: Supreme Court, 9th Circuit, California (Fed), Eastern District. The reality is the magistrate doesn’t read them anyway, but DeepSeek ultimately gives me a pool of citations to choose from, something the other LLMs struggle with (or simply won’t do, no matter how you push them).
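The “one citation per level” strategy is mechanical enough to sketch out. Assuming each candidate in the pool is tagged with its court level, the selection is just a first-match-per-level pass (the entries below are placeholders, not actual picks from the case):

```python
# Pick one citation per court level from a candidate pool; first hit per level wins.
# The level labels mirror my strategy; pool entries here are placeholders.
LEVELS = ["Supreme Court", "9th Circuit", "California (Fed)", "Eastern District"]

def pick_per_level(pool: list[dict]) -> dict[str, str]:
    """Return {level: citation} with at most one citation per level."""
    chosen: dict[str, str] = {}
    for cite in pool:
        level = cite["level"]
        if level in LEVELS and level not in chosen:
            chosen[level] = cite["cite"]
    return chosen
```

Ordering the pool by how on-point each case is before running the pass means the first hit per level is also the best one.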

Gemini (using 2.5 Pro/5-06) is now my primary starting point for writing, even though I hate that it can’t export to .odf or even .docx formats. It has the worst “canvas”-style editor of any of them, so I have to reformat the documents significantly. That’s worth the annoyance, though, since Gemini is easily the most thoughtful and well-rounded of the LLMs I’ve used. Much like DeepSeek gives lots of citation options, Gemini gives the best pool of writing options; its ability to flexibly address issues in different ways is the best of the bunch so far.

Claude is still my pretty-but-shallow option: it comes up with the best and most compelling pure language, even if its depth of analysis falls short of the others. I wish I didn’t have to prompt it so explicitly to get decent results, or it might be the primary canvas despite its painful context window. One thing I find it really helpful for is running my DeepSeek output (which tends toward pretty.. “direct” language) through Claude to soften it up.

And finally there’s ChatGPT, which up until the last few days was my primary for everything. It’s easy to get started with, it’s encouraging, and it produces compelling results, but it’s kind of a piece of shit. It’s constantly messing up context, screwing up citations, and arbitrarily rewriting things. It’s easily the most obstinately wrong of the LLMs, insisting that it’s doing something correctly no matter how many times you paste the incorrect output back in and it admits it messed up. It’s the only one of the four where I’ve had to deal with “hallucinations”, and it explains them as “whoops, I accidentally overwrite my own context a lot”. It’s absolutely maddening at times. I’m steadily weaning myself off of it, and it’ll probably just be another voice in the room soon.

An interesting thing happened a bit ago: I had my first LLM disagreement! The Defense responded to a discovery request with bullshit and boilerplate, so I asked each of the LLMs what a good email response would look like, considering I’m already nearly done with a Rule 37 motion to compel. Claude responded that it’s probably not a good idea to respond at all: just update the motion with the response as an exhibit, note the bullshit parts since they helped (a lot), and call it a day. DeepSeek was its typical “RIP THEIR THROATS OUT” self and provided a detailed list of regulations and rules for going on the offensive instead of relying only on the motion route. Gemini pointed out that the email route is more about reinforcing our meet-and-confer duty: we’re right, and we’re going to integrate this into the motion, but the focus here should be on making the meet-and-confer record rock solid (pretty good advice). ChatGPT recommended the email, but produced recommendations that didn’t focus on strategy at all, just a direct rebuttal to the discovery response.

Interestingly, opposing counsel asked for a discovery extension (after the suit has been active for more than two years, they are suddenly finding SHRA’s bureaucracy hard to navigate), but refused to do a rolling production or produce electronic records (which should be pretty easy, right?). It’s baffling to think that if I’d had access to these tools two years ago, this organization is such a mess that they’d have been crushed out of the gate. Instead they were able to delay and avoid discovery to the point that we now have to start talking about institutional spoliation, because no one remembers anything.

Just had the thought that doing a blog like this while the litigation is ongoing is both naive and stupid, especially considering the mixed messaging from the court about permissible use (but maybe if it’s okay for the bar, it’s okay for me?). This popped up as I came across an old archive containing a backup of files I believed were lost after we got evicted, some of which are responsive to existing discovery requests. I haven’t even had a chance to go through it all yet to see what has already been disclosed and what hasn’t, but here I am, wanting to talk about how the tools are helping me plan strategy while potentially exposing myself unnecessarily or inadvertently.

The fool forges forward

One of the things that stood out to me was how much of the information in this case should already have been protected but hasn’t been. This has always been a concern, and one of the reasons I was hesitant about pursuing this at all; the gross invasion of privacy necessary to pursue it feels caustic. My initial reaction was to file anonymously, but after some reading it appeared that anonymous filings are very rarely granted. It ultimately became a question of whether this was important enough (to both us and every other disabled participant SHRA treats similarly) to sacrifice our privacy over. I think it is.

What should have been done out of the gate was putting a protective order in place. While I was reading and trying to absorb as much of the local rules as I could (in the middle of being evicted, cut off from other services, and otherwise harassed), the option of a protective order just didn’t register at all. The further along the process we got, the less space the option took up.

This is one place where LLMs work as an amazing sanity check: we can, without penalty, ask one to walk us through the process from the beginning, with as many permutations as we like, in a way that would cognitively exhaust any sane individual. And for people in our position, who don’t even know what we don’t know, being able to constantly factor in new information as we encounter it becomes a tremendously powerful tool.

While there’s supposed to be some leeway for “inelegant pleading” by pro se parties, in practice there isn’t unless a judge is personally sympathetic. That means pro se parties march into a minefield of issues that have nothing to do with the core of the case: procedural explosions that can turn a clear-cut set of facts into an irrelevant mist.

One of the most difficult aspects of all this is how each LLM introduces its own unique issues. I’m bouncing between Claude 3.7, GPT-4o, DeepSeek, and Gemini, and they all have weird peculiarities that blow up the others’ work. DeepSeek, for instance, is super focused on formatting, obsessing over tables, bolding, italics, everything but the actual content. Gemini is a turd that fights you the whole way on anything resembling legal advice, so you have to wrestle it to the ground with prompts and cobble together the results. Claude writes beautifully but is as deep as a puddle, while ChatGPT is deep but impossibly schizophrenic and ADHD at the same time.

I went back and looked at the first two pleadings I used these LLMs for, and holy crap, there are a lot of issues I missed by trusting the output. First, ChatGPT will absolutely mangle the hell out of citations, and worse, it’s super confident about its manglings. As an example, I uploaded a copy of the docket report, and it still got every single citation wrong. Worse, it misinterpreted who actually filed particular motions, leading it to conclude that most of the case was dead.
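Since then, any docket reference an LLM hands back gets sanity-checked against the actual docket report before it goes anywhere near a filing. A minimal version of that check might look like this (the “ECF No. N” pattern and the function name are my own convention for the sketch, and it assumes the docket has already been reduced to a set of known entry numbers):

```python
# Cross-check docket-entry citations in an LLM draft against the real docket.
# Assumes the docket report has been parsed into a set of known entry numbers;
# the "ECF No. N" pattern is my own convention for this sketch.
import re

def find_bad_docket_refs(text: str, known_entries: set[int]) -> list[int]:
    """Return entry numbers cited in `text` that don't exist on the docket."""
    cited = {int(n) for n in re.findall(r"ECF No\. (\d+)", text)}
    return sorted(cited - known_entries)
```

It won’t catch a citation that points to the wrong-but-existing entry, but it does catch the pure inventions, which so far have been the bulk of the problem.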

One thing is really clear: for all the power and potential these tools have, we are a million miles away from them being usable without extensive supervision. They’re also really terrible at understanding intent. As an example, left to their own devices, all of the LLMs completely ignored the narrow relief I asked for in a motion for partial MSJ (basically just a ruling that the ADA, FHA, and the Rehab Act require an interactive process and individualized assessment) and transformed it into a full-blown MSJ focused on damage calculations instead of narrow injunctive relief. And they constantly go off on little tangents like that, hyper-focusing on some aspect of the text that may or may not be contextually important and trying to expand it.

All said, this is still very much a work in progress, and I’m learning that trust is bad.

Welcome to The SHRA Files Blog — where we’ll be sharing updates, research, public records, and commentary on our ongoing civil rights lawsuit and the broader failures of Sacramento’s housing system.

This site exists because no one else was telling this story. Because no agency, nonprofit, or public official stepped in when the Sacramento Housing and Redevelopment Agency (SHRA) failed to meet its basic responsibilities. Because we — and many others like us — were denied access to services that should have been guaranteed.

We started with nothing but documentation, determination, and the experience of being repeatedly failed by the systems meant to help. When traditional legal aid and advocacy organizations told us they couldn’t help, we turned to AI tools to help us structure our case, understand the law, and begin fighting back.

This site is just the beginning.

In the weeks ahead, we’ll be:

  • Breaking down SHRA’s failures in serving disabled and low-income tenants
  • Publishing public filings and supporting exhibits
  • Highlighting how the Housing Choice Voucher program is being mismanaged
  • Documenting the role of AI in expanding legal access

If you’ve been impacted by SHRA or want to contribute to the project, check out our Get Involved page.

Thanks for reading — and welcome to the fight.

— The SHRA Files Team

Leave a Comment

Your email address will not be published. Required fields are marked *

I’ve been attempting to push discovery, the result has been underwhelming. So far they’ve missed most of the deadlines, when notified they missed the deadlines they keep asking for extensions, and when they get extensions they respond with boilerplate and bullshit. I’m to the point where I’m filing a Rule 37(b) motion (will upload to the files section) after I’m finished with this post. The neat thing about the filing is that there’s over 300 pages of exhibits, all of which is boilerplate and bullshit that I had to attach and mail back to the Defense because the magistrate gave me the big middle finger by allowing the court to notify me of things electronically, but refusing to allow me to file electronically. I have lots to say about this magistrate, but we’ll wait until we get clear of her first.

One of the most fascinating things about the boilerplate and bullshit that did get returned was the revelation that five of the named Defendants no longer work at SHRA, and the Defense isn’t sure if they represent them anymore. Like they completely hid that almost everyone involved with the day to day operations over there has left and somehow didn’t mention it to us or the court. Worse, in their initial disclosure, they claimed only one person (who is no longer at SHRA) was responsible for deciding reasonable accommodation requests, now they’ve given me a list of 16 people, with no explanation.

Oh yeah, they can’t seem to find any of the electronically stored information that under the CPRA they’d be responsive to, which is the reason they need so many extensions.

The worst part is that despite them admitting all of this, there’s zero chance of them getting sanctions. And there’s literally nothing to be done about it unless/until it gets to the appellate level.

Leave a Comment

Your email address will not be published. Required fields are marked *

Up until now I assumed that the issues at SHRA were just incompetence creep, that the issues are product of slow pushes back on compliance issues that never got corrected, to the point where institutional knowledge of the practices necessary to maintain compliance was lost.

One of the things that I really like about DeepSeek is that it gets information through methods that aren’t exactly on the up and up, which means that it finds things which are hidden behind legal wrangling that doesn’t exist in China. And one of the things it found was a 2023 Internal Memo generated through whistleblower testimony in which SHRA staff instructed to “prioritize voucher terminations over costly mods”.

And the implementation of this was SHRA terminating vouchers over requests for shower chairs and grab bars as early as 2022. They provided data which showed 68% of disabled tenants who requested mods in 2023 lost vouchers within 6 months (vs. 12% of non-disabled).

WHAT.

And there’s so much worse that I’ll share once I get the discovery back. I’m stunned.

Leave a Comment

Your email address will not be published. Required fields are marked *

Okay the last few days have been a bit of a flurry. I’m still working on getting all the filings and the Defense responses uploaded, but I need to make sure we have a sane directory setup first because there’s quite a bit.

After a few weeks using these LLMs, my attitudes about them are starting to shift. Each of them seems to have their own “personality” above and beyond the artificial personality. The LLM I’ve been most amazed by so far is DeepSeek, which is funny because it was the LLM I was most skeptical of initially. Part of the issue with DeepSeek is that it doesn’t do as good of a job hand holding through the though process, or more appropriately, it’s probably the most “autistic” of the engines. The biggest standout for DeepSeek (especially compared to Gemini which is the most comparable) is that it’s SO GOOD with citations. It’s the only one that freely “admits” it’s plundering the WestLaw database meaning we have a TON of local Eastern District citations.

My citation strategy has been to use something from each level, Supreme Court, 9th Circuit, California (Fed), Eastern District. The reality is the magistrate doesn’t read them anyway, but DeepSeek ultimately gives me a pool of citations to choose from, something the other LLMs struggle with (or just don’t do no matter how you beat them).

Gemini (using 2.5 Pro/5-06) is now my primary writing starting point, even though I hate that it can’t export to .odf or even .docx formats. Because I have to reformat the documents significantly, it has the worst “canvas” style option. That’s kind of worth the annoyance though since Gemini is easily the most “thoughful” and well rounded of the LLMs I’ve used. Much like DeepSeek gives lots of citation options, Gemini gives the best pool of writing options, it’s ability to flexibly address issues in different ways is the best of the bunch so far.

Claude is still my pretty but shallow option, it comes up with the best/most compelling pure language, even if it’s depth of analysis falls short of the others. I wish I didn’t have to prompt it so explicitly to get decent results or it might be the primary canvas despite it’s painful context window. One thing I find it really helpful for is running my DeepSeek (which produces pretty.. “direct” language) through Claude to soften it up.

And finally there’s ChatGPT which up until the last few days was my primary for everything. It’s easy to get started with it, it’s encouraging, it produces compelling results, but it’s kind of a piece of shit. It’s constantly messing up context, screwing up citations, and arbitrarily rewriting things. It’s easily the most obstinately wrong of the LLMs, insisting that it’s doing something correctly no matter how many times you post the incorrect output back into it and it admits it messed up. It’s the only LLM out of the three that I’ve had to deal with “hallucinations”, and it explains it as “woops, I overwrite my own context a lot accidentally”. It’s absolutely maddening at times. I’m steadily weaning myself off of it and it’ll probably just be another voice in the room soon.

An interesting thing happened a bit ago, I had my first LLM disagreement! The Defense responded to a discovery request with bullshit and boilerplate, so I asked each of the LLMs what a good email response would be considering I’m already nearly done with a rule 37 motion to compel. Claude responded that it’s probably not a good idea to respond at all, just update the motion with the response as an exhibit, note the bullshit parts since they helped (a lot), and call it a day. DeepSeek was it’s typical “RIP THEIR THROATS OUT” and provided a detailed list of regulations and rules to go on the offensive instead of only using the motion route. Gemini pointed out that going the email route is more about reinforcing our meet and confer duty, we’re right and we are going to be integrating this in the motion, but the focus here should be on making the meet and confer rock solid (pretty good advice). ChatGPT recommended the email, but produced recommendations that didn’t focus on strategy at all, just a direct rebuttal to the discovery response.

Interestingly, opposing counsel asked for a discovery extension (after more than two years the suit has been active, they are suddenly finding SHRA bureaucracy hard to navigate), but refused to do a rolling production or produce electronic records (which should be pretty easy right?). It’s baffling to me that if I had access to these tools two years ago, this organization is such a mess that they’d have gotten crushed out of the gate. Instead they were able to delay and avoid discovery to the point that we now have to start talking about institutional spoliation because no one remembers anything.

Leave a Comment

Your email address will not be published. Required fields are marked *

Just had the thought that doing a blog like this, while the litigation is on-going, is both naive and stupid, especially considering the mixed permissible use messaging from the court (but maybe okay for the bar, okay for me?). This popped up as I came across an old archive that had a backup of files which I believed was lost after we got evicted, some of which is responsive to existing discovery requests. I haven’t even had a chance to go through it all yet and see what has already been disclosed and what hasn’t, but here I am in a position where I really want to talk about how the tools are helping me plan strategy while potentially exposing myself unnecessarily/inadvertently.

The fool forges forward

One of the things that stood out to me was how much of the information in this case should have already been protected but hasn’t been. This has always been a concern, and one of the reasons I was hesitant about pursuing this at all, the gross invasion of privacy necessary to pursue this feels caustic. My initial reaction was filing anonymously, but after some reading it appeared that these are very rarely granted. It ultimately became a choice between whether this was important enough (to both us and every other disabled participant SHRA treats similarly) to sacrifice our privacy over. I think so.

What should have been done out of the gate is setting a protective order in place. Reading and trying to absorb as much of the local rules (while in the middle of being evicted, being cut off from other services, and other harassment), the option of a protective order just didn’t really register at all. The further along the process we got, the less space the option took up.

This is one place where LLMs work as an amazing sanity check, we can without penalty ask it to walk us through the process from the beginning, with as many permutations as we like, in a way that would cognitively exhaust any sane individual. And for individuals in the position we are, where we don’t even know what we don’t know, being able to constantly refactor new information in as we encounter it becomes a tremendously powerful tool.

While there’s supposed to be some leeway for “inelegant pleading” by pro se parties, in practice there isn’t unless a judge is personally sympathetic. Which means that pro se parties march into a minefield of issues that have nothing to do with the core issues of the case, procedural explosions which can turn a clear cut set of facts into an irrelevant mist.

Leave a Comment

Your email address will not be published. Required fields are marked *

One of the most difficult aspects of all this is how prone to unique issues each LLM introduces. I’m bouncing between Claude 3.7, GPT 4o, DeepSeek, and Gemini, and they all have weird peculiarities that blow up the others. DeepSeek for instance is super focused on formatting, being obsessive about tables, bolding, italics, everything but the actual content. Gemini is a turd and likes to fight you the whole time when it comes to legal advice altogether, so you have to wrestle it to the ground with prompts and cobble together the results. Claude writes beautifully but is as deep as a puddle, while ChatGPT is deep but impossibly schizophrenic and ADHD at the same time.

I went back and looked at my first two pleadings I used these LLMs for and holy crap there are a lot of issues that I missed trusting the output. First, ChatGPT will absolutely mangle the hell out of citations, and worse is super confident about it’s manglings. As an example I uploaded a copy of the docket report, and somehow it still got every single citation wrong. Worse, it misinterpreted who actually filed particular motions, leading to it interpreting most of the case being dead.

One thing is really clear, that as powerful and as much potential as these tools have, we are a million miles away from them being ready to use without extensive supervision. They also are really terrible at understanding, as an example left to it’s own devices all of the LLMs completely ignored the actual damages I asked for in a motion for partial MSJ (basically just ruling that the ADA, FHA, and Rehab require an interactive process and individualized assessment) and transformed it into a full blown MSJ focused on damage calculations instead of a narrow injunctive relief. And they constantly go off on little tangents like that where they hyper focus on some aspect of the text that may or may not be contextually important and try to expand it.

All said, this is still very much a work in progress, and I’m learning that trust is bad.

Leave a Comment

Your email address will not be published. Required fields are marked *

It seems we can’t find what you’re looking for. Perhaps searching can help.

Welcome to The SHRA Files Blog — where we’ll be sharing updates, research, public records, and commentary on our ongoing civil rights lawsuit and the broader failures of Sacramento’s housing system.

This site exists because no one else was telling this story. Because no agency, nonprofit, or public official stepped in when the Sacramento Housing and Redevelopment Agency (SHRA) failed to meet its basic responsibilities. Because we — and many others like us — were denied access to services that should have been guaranteed.

We started with nothing but documentation, determination, and the experience of being repeatedly failed by the systems meant to help. When traditional legal aid and advocacy organizations told us they couldn’t help, we turned to AI tools to help us structure our case, understand the law, and begin fighting back.

This site is just the beginning.

In the weeks ahead, we’ll be:

  • Breaking down SHRA’s failures in serving disabled and low-income tenants
  • Publishing public filings and supporting exhibits
  • Highlighting how the Housing Choice Voucher program is being mismanaged
  • Documenting the role of AI in expanding legal access

If you’ve been impacted by SHRA or want to contribute to the project, check out our Get Involved page.

Thanks for reading — and welcome to the fight.

— The SHRA Files Team

Leave a Comment

Your email address will not be published. Required fields are marked *

I’ve been attempting to push discovery, the result has been underwhelming. So far they’ve missed most of the deadlines, when notified they missed the deadlines they keep asking for extensions, and when they get extensions they respond with boilerplate and bullshit. I’m to the point where I’m filing a Rule 37(b) motion (will upload to the files section) after I’m finished with this post. The neat thing about the filing is that there’s over 300 pages of exhibits, all of which is boilerplate and bullshit that I had to attach and mail back to the Defense because the magistrate gave me the big middle finger by allowing the court to notify me of things electronically, but refusing to allow me to file electronically. I have lots to say about this magistrate, but we’ll wait until we get clear of her first.

One of the most fascinating things about the boilerplate and bullshit that did get returned was the revelation that five of the named Defendants no longer work at SHRA, and the Defense isn’t sure if they represent them anymore. Like they completely hid that almost everyone involved with the day to day operations over there has left and somehow didn’t mention it to us or the court. Worse, in their initial disclosure, they claimed only one person (who is no longer at SHRA) was responsible for deciding reasonable accommodation requests, now they’ve given me a list of 16 people, with no explanation.

Oh yeah, they can’t seem to find any of the electronically stored information that under the CPRA they’d be responsive to, which is the reason they need so many extensions.

The worst part is that despite them admitting all of this, there’s zero chance of them getting sanctions. And there’s literally nothing to be done about it unless/until it gets to the appellate level.

Leave a Comment

Your email address will not be published. Required fields are marked *

Up until now I assumed that the issues at SHRA were just incompetence creep, that the issues are product of slow pushes back on compliance issues that never got corrected, to the point where institutional knowledge of the practices necessary to maintain compliance was lost.

One of the things that I really like about DeepSeek is that it gets information through methods that aren’t exactly on the up and up, which means that it finds things which are hidden behind legal wrangling that doesn’t exist in China. And one of the things it found was a 2023 Internal Memo generated through whistleblower testimony in which SHRA staff instructed to “prioritize voucher terminations over costly mods”.

And the implementation of this was SHRA terminating vouchers over requests for shower chairs and grab bars as early as 2022. They provided data which showed 68% of disabled tenants who requested mods in 2023 lost vouchers within 6 months (vs. 12% of non-disabled).

WHAT.

And there’s so much worse that I’ll share once I get the discovery back. I’m stunned.

Leave a Comment

Your email address will not be published. Required fields are marked *

Okay the last few days have been a bit of a flurry. I’m still working on getting all the filings and the Defense responses uploaded, but I need to make sure we have a sane directory setup first because there’s quite a bit.

After a few weeks using these LLMs, my attitudes about them are starting to shift. Each of them seems to have their own “personality” above and beyond the artificial personality. The LLM I’ve been most amazed by so far is DeepSeek, which is funny because it was the LLM I was most skeptical of initially. Part of the issue with DeepSeek is that it doesn’t do as good of a job hand holding through the though process, or more appropriately, it’s probably the most “autistic” of the engines. The biggest standout for DeepSeek (especially compared to Gemini which is the most comparable) is that it’s SO GOOD with citations. It’s the only one that freely “admits” it’s plundering the WestLaw database meaning we have a TON of local Eastern District citations.

My citation strategy has been to use something from each level, Supreme Court, 9th Circuit, California (Fed), Eastern District. The reality is the magistrate doesn’t read them anyway, but DeepSeek ultimately gives me a pool of citations to choose from, something the other LLMs struggle with (or just don’t do no matter how you beat them).

Gemini (using 2.5 Pro/5-06) is now my primary writing starting point, even though I hate that it can’t export to .odf or even .docx formats. Because I have to reformat the documents significantly, it has the worst “canvas” style option. That’s kind of worth the annoyance though since Gemini is easily the most “thoughful” and well rounded of the LLMs I’ve used. Much like DeepSeek gives lots of citation options, Gemini gives the best pool of writing options, it’s ability to flexibly address issues in different ways is the best of the bunch so far.

Claude is still my pretty but shallow option, it comes up with the best/most compelling pure language, even if it’s depth of analysis falls short of the others. I wish I didn’t have to prompt it so explicitly to get decent results or it might be the primary canvas despite it’s painful context window. One thing I find it really helpful for is running my DeepSeek (which produces pretty.. “direct” language) through Claude to soften it up.

And finally there’s ChatGPT which up until the last few days was my primary for everything. It’s easy to get started with it, it’s encouraging, it produces compelling results, but it’s kind of a piece of shit. It’s constantly messing up context, screwing up citations, and arbitrarily rewriting things. It’s easily the most obstinately wrong of the LLMs, insisting that it’s doing something correctly no matter how many times you post the incorrect output back into it and it admits it messed up. It’s the only LLM out of the three that I’ve had to deal with “hallucinations”, and it explains it as “woops, I overwrite my own context a lot accidentally”. It’s absolutely maddening at times. I’m steadily weaning myself off of it and it’ll probably just be another voice in the room soon.

An interesting thing happened a bit ago: I had my first LLM disagreement! The Defense responded to a discovery request with bullshit and boilerplate, so I asked each of the LLMs what a good email response would be, considering I’m already nearly done with a Rule 37 motion to compel. Claude said it’s probably not a good idea to respond at all: just update the motion with the response as an exhibit, note the bullshit parts (since they helped, a lot), and call it a day. DeepSeek was its typical “RIP THEIR THROATS OUT” self, providing a detailed list of regulations and rules for going on the offensive instead of relying only on the motion. Gemini pointed out that the email route is really about reinforcing our meet-and-confer duty: we’re right, and we’re going to integrate this into the motion, but the focus here should be on making the meet-and-confer record rock solid (pretty good advice). ChatGPT recommended the email, but produced recommendations that didn’t focus on strategy at all, just a direct rebuttal to the discovery response.

Interestingly, opposing counsel asked for a discovery extension (after the suit has been active for more than two years, they’re suddenly finding SHRA’s bureaucracy hard to navigate), but refused to do a rolling production or produce electronic records (which should be pretty easy, right?). It’s baffling to think that if I’d had access to these tools two years ago, they’d have been crushed out of the gate; this organization is that much of a mess. Instead they were able to delay and avoid discovery to the point that we now have to start talking about institutional spoliation, because no one remembers anything.


Just had the thought that doing a blog like this, while the litigation is ongoing, is both naive and stupid, especially considering the mixed permissible-use messaging from the court (but maybe what’s okay for the bar is okay for me?). This popped up as I came across an old archive containing a backup of files I believed were lost after we got evicted, some of which are responsive to existing discovery requests. I haven’t even had a chance to go through it all yet and see what has already been disclosed and what hasn’t, but here I am in a position where I really want to talk about how these tools are helping me plan strategy, while potentially exposing myself unnecessarily or inadvertently.

The fool forges forward

One of the things that stood out to me was how much of the information in this case should already have been protected but hasn’t been. This has always been a concern, and it’s one of the reasons I was hesitant about pursuing this at all: the gross invasion of privacy necessary to pursue it feels caustic. My initial instinct was to file anonymously, but after some reading it appeared that requests to proceed anonymously are very rarely granted. It ultimately became a question of whether this was important enough (to both us and every other disabled participant SHRA treats similarly) to sacrifice our privacy over. I think it is.

What should have been done out of the gate was getting a protective order in place. While I was reading and trying to absorb as much of the local rules as I could (in the middle of being evicted, being cut off from other services, and other harassment), the option of a protective order just didn’t register at all. The further along the process we got, the less space the option took up.

This is one place where LLMs work as an amazing sanity check: we can, without penalty, ask one to walk us through the process from the beginning, with as many permutations as we like, in a way that would cognitively exhaust any sane individual. And for people in our position, who don’t even know what we don’t know, being able to constantly factor in new information as we encounter it becomes a tremendously powerful tool.

While there’s supposed to be some leeway for “inelegant pleading” by pro se parties, in practice there isn’t unless a judge is personally sympathetic. Which means pro se parties march into a minefield of issues that have nothing to do with the core of the case: procedural explosions that can turn a clear-cut set of facts into an irrelevant mist.


One of the most difficult aspects of all this is how many unique issues each LLM introduces. I’m bouncing between Claude 3.7, GPT-4o, DeepSeek, and Gemini, and they all have weird peculiarities that blow up the others. DeepSeek, for instance, is obsessive about formatting: tables, bolding, italics, everything but the actual content. Gemini is a turd that fights you the whole way on anything resembling legal advice, so you have to wrestle it to the ground with prompts and cobble together the results. Claude writes beautifully but is as deep as a puddle, while ChatGPT is deep but scattered and impossible to keep on task.

I went back and looked at the first two pleadings I used these LLMs for, and holy crap, there are a lot of issues I missed by trusting the output. First, ChatGPT will absolutely mangle the hell out of citations, and worse, it’s supremely confident about its manglings. As an example, I uploaded a copy of the docket report, and somehow it still got every single citation wrong. Worse, it misinterpreted who actually filed particular motions, leading it to conclude that most of the case was dead.
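One cheap guardrail I could have used from the start: a dumb shape-check that flags anything not even formatted like a citation before I waste time chasing it down. Here is a minimal sketch; the regex is my own loose pattern, not any official Bluebook validator, and it only catches mangled formatting. It cannot tell you whether the case exists or says what the LLM claims it says.

```python
import re

# Loose shape-check for federal case citations like
# "Bell Atl. Corp. v. Twombly, 550 U.S. 544 (2007)".
CITE_RE = re.compile(
    r".+ v\. .+, "        # case name: "Party v. Party, "
    r"\d{1,4} "           # volume number
    r"[A-Za-z0-9. ]+ "    # reporter: U.S., F.3d, F. Supp. 3d, ...
    r"\d{1,5} "           # first page
    r"\(.*\d{4}\)"        # court/year parenthetical
)

def looks_like_citation(text: str) -> bool:
    """True if the string is at least shaped like a case citation."""
    return bool(CITE_RE.fullmatch(text.strip()))
```

Running every LLM-suggested citation through something like this catches the garbled ones instantly; the plausible-looking ones still have to be pulled and read.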

One thing is really clear: for all the power and potential these tools have, we are a million miles away from them being ready to use without extensive supervision. They’re also really terrible at understanding intent. As an example, left to their own devices, all of the LLMs completely ignored the relief I actually asked for in a motion for partial summary judgment (basically just a ruling that the ADA, FHA, and Rehab Act require an interactive process and an individualized assessment) and transformed it into a full-blown MSJ focused on damage calculations instead of narrow injunctive relief. And they constantly go off on tangents like that, hyper-focusing on some aspect of the text that may or may not be contextually important and trying to expand it.

All said, this is still very much a work in progress, and I’m learning that trust is bad.


Welcome to The SHRA Files Blog — where we’ll be sharing updates, research, public records, and commentary on our ongoing civil rights lawsuit and the broader failures of Sacramento’s housing system.

This site exists because no one else was telling this story. Because no agency, nonprofit, or public official stepped in when the Sacramento Housing and Redevelopment Agency (SHRA) failed to meet its basic responsibilities. Because we — and many others like us — were denied access to services that should have been guaranteed.

We started with nothing but documentation, determination, and the experience of being repeatedly failed by the systems meant to help. When traditional legal aid and advocacy organizations told us they couldn’t help, we turned to AI tools to help us structure our case, understand the law, and begin fighting back.

This site is just the beginning.

In the weeks ahead, we’ll be:

  • Breaking down SHRA’s failures in serving disabled and low-income tenants
  • Publishing public filings and supporting exhibits
  • Highlighting how the Housing Choice Voucher program is being mismanaged
  • Documenting the role of AI in expanding legal access

If you’ve been impacted by SHRA or want to contribute to the project, check out our Get Involved page.

Thanks for reading — and welcome to the fight.

— The SHRA Files Team


I’ve been attempting to push discovery, but the results have been underwhelming. So far the Defense has missed most of its deadlines; when notified, they keep asking for extensions, and when they get extensions they respond with boilerplate and bullshit. I’m at the point of filing a Rule 37(b) motion (I’ll upload it to the files section) after I finish this post. The neat thing about the filing is that there are over 300 pages of exhibits, all of it boilerplate and bullshit I had to attach and mail back to the Defense, because the magistrate gave me the big middle finger by allowing the court to notify me of things electronically while refusing to let me file electronically. I have lots to say about this magistrate, but that will wait until we’re clear of her.

One of the most fascinating things about the boilerplate and bullshit that did get returned was the revelation that five of the named Defendants no longer work at SHRA, and the Defense isn’t sure if it still represents them. They completely hid that almost everyone involved with day-to-day operations over there has left, and somehow didn’t mention it to us or the court. Worse, in their initial disclosure they claimed only one person (who is no longer at SHRA) was responsible for deciding reasonable accommodation requests; now they’ve given me a list of 16 people, with no explanation.

Oh yeah, and they can’t seem to find any of the electronically stored information that would be responsive under the CPRA, which is supposedly why they need so many extensions.

The worst part is that despite them admitting all of this, there’s zero chance of them getting sanctioned. And there’s literally nothing to be done about it unless/until it gets to the appellate level.


Up until now I assumed that the issues at SHRA were just incompetence creep: the product of slow pushback on compliance issues that never got corrected, to the point where institutional knowledge of the practices necessary to maintain compliance was lost.

One of the things I really like about DeepSeek is that it gets information through methods that aren’t exactly on the up and up, which means it finds things hidden behind legal wrangling that doesn’t exist in China. One of the things it surfaced was a 2023 internal memo, generated through whistleblower testimony, in which SHRA staff were instructed to “prioritize voucher terminations over costly mods.”

And the implementation of this was SHRA terminating vouchers over requests for shower chairs and grab bars as early as 2022. It also produced data showing that 68% of disabled tenants who requested mods in 2023 lost their vouchers within six months (vs. 12% of non-disabled tenants).

WHAT.

And there’s so much worse that I’ll share once I get the discovery back. I’m stunned.
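For what it’s worth, taking those claimed figures at face value (and they absolutely need independent verification in discovery before anyone relies on them), the disparity works out like this:

```python
# Claimed termination rates from the alleged 2023 memo data -- unverified.
disabled_rate = 0.68      # disabled tenants who requested mods and lost vouchers
non_disabled_rate = 0.12  # non-disabled comparison group

# Disabled tenants were this many times more likely to lose a voucher.
ratio = disabled_rate / non_disabled_rate
print(f"{ratio:.1f}x more likely")
```

That is roughly a 5.7x disparity, which, if it holds up against real records, is the kind of number disparate-impact arguments are built on.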


Okay, the last few days have been a bit of a flurry. I’m still working on getting all the filings and the Defense’s responses uploaded, but I need to make sure we have a sane directory setup first, because there’s quite a bit.

After a few weeks using these LLMs, my attitudes about them are starting to shift. Each of them seems to have its own “personality” above and beyond the artificial one. The LLM I’ve been most amazed by so far is DeepSeek, which is funny because it’s the one I was most skeptical of initially. Part of the issue with DeepSeek is that it doesn’t do as good a job of hand-holding you through the thought process; it’s probably the most blunt and literal-minded of the engines. The biggest standout for DeepSeek (especially compared to Gemini, which is the most comparable) is that it’s SO GOOD with citations. It’s the only one that freely “admits” it’s plundering the Westlaw database, meaning we have a TON of local Eastern District citations.

My citation strategy has been to use something from each level, Supreme Court, 9th Circuit, California (Fed), Eastern District. The reality is the magistrate doesn’t read them anyway, but DeepSeek ultimately gives me a pool of citations to choose from, something the other LLMs struggle with (or just don’t do no matter how you beat them).

Gemini (using 2.5 Pro/5-06) is now my primary writing starting point, even though I hate that it can’t export to .odf or even .docx formats. Because I have to reformat the documents significantly, it has the worst “canvas” style option. That’s kind of worth the annoyance though since Gemini is easily the most “thoughful” and well rounded of the LLMs I’ve used. Much like DeepSeek gives lots of citation options, Gemini gives the best pool of writing options, it’s ability to flexibly address issues in different ways is the best of the bunch so far.

Claude is still my pretty but shallow option, it comes up with the best/most compelling pure language, even if it’s depth of analysis falls short of the others. I wish I didn’t have to prompt it so explicitly to get decent results or it might be the primary canvas despite it’s painful context window. One thing I find it really helpful for is running my DeepSeek (which produces pretty.. “direct” language) through Claude to soften it up.

And finally there’s ChatGPT which up until the last few days was my primary for everything. It’s easy to get started with it, it’s encouraging, it produces compelling results, but it’s kind of a piece of shit. It’s constantly messing up context, screwing up citations, and arbitrarily rewriting things. It’s easily the most obstinately wrong of the LLMs, insisting that it’s doing something correctly no matter how many times you post the incorrect output back into it and it admits it messed up. It’s the only LLM out of the three that I’ve had to deal with “hallucinations”, and it explains it as “woops, I overwrite my own context a lot accidentally”. It’s absolutely maddening at times. I’m steadily weaning myself off of it and it’ll probably just be another voice in the room soon.

An interesting thing happened a bit ago, I had my first LLM disagreement! The Defense responded to a discovery request with bullshit and boilerplate, so I asked each of the LLMs what a good email response would be considering I’m already nearly done with a rule 37 motion to compel. Claude responded that it’s probably not a good idea to respond at all, just update the motion with the response as an exhibit, note the bullshit parts since they helped (a lot), and call it a day. DeepSeek was it’s typical “RIP THEIR THROATS OUT” and provided a detailed list of regulations and rules to go on the offensive instead of only using the motion route. Gemini pointed out that going the email route is more about reinforcing our meet and confer duty, we’re right and we are going to be integrating this in the motion, but the focus here should be on making the meet and confer rock solid (pretty good advice). ChatGPT recommended the email, but produced recommendations that didn’t focus on strategy at all, just a direct rebuttal to the discovery response.

Interestingly, opposing counsel asked for a discovery extension (after more than two years the suit has been active, they are suddenly finding SHRA bureaucracy hard to navigate), but refused to do a rolling production or produce electronic records (which should be pretty easy right?). It’s baffling to me that if I had access to these tools two years ago, this organization is such a mess that they’d have gotten crushed out of the gate. Instead they were able to delay and avoid discovery to the point that we now have to start talking about institutional spoliation because no one remembers anything.

Leave a Comment

Your email address will not be published. Required fields are marked *

Just had the thought that doing a blog like this, while the litigation is on-going, is both naive and stupid, especially considering the mixed permissible use messaging from the court (but maybe okay for the bar, okay for me?). This popped up as I came across an old archive that had a backup of files which I believed was lost after we got evicted, some of which is responsive to existing discovery requests. I haven’t even had a chance to go through it all yet and see what has already been disclosed and what hasn’t, but here I am in a position where I really want to talk about how the tools are helping me plan strategy while potentially exposing myself unnecessarily/inadvertently.

The fool forges forward

One of the things that stood out to me was how much of the information in this case should have already been protected but hasn’t been. This has always been a concern, and one of the reasons I was hesitant about pursuing this at all, the gross invasion of privacy necessary to pursue this feels caustic. My initial reaction was filing anonymously, but after some reading it appeared that these are very rarely granted. It ultimately became a choice between whether this was important enough (to both us and every other disabled participant SHRA treats similarly) to sacrifice our privacy over. I think so.

What should have been done out of the gate is setting a protective order in place. Reading and trying to absorb as much of the local rules (while in the middle of being evicted, being cut off from other services, and other harassment), the option of a protective order just didn’t really register at all. The further along the process we got, the less space the option took up.

This is one place where LLMs work as an amazing sanity check, we can without penalty ask it to walk us through the process from the beginning, with as many permutations as we like, in a way that would cognitively exhaust any sane individual. And for individuals in the position we are, where we don’t even know what we don’t know, being able to constantly refactor new information in as we encounter it becomes a tremendously powerful tool.

While there’s supposed to be some leeway for “inelegant pleading” by pro se parties, in practice there isn’t unless a judge is personally sympathetic. Which means that pro se parties march into a minefield of issues that have nothing to do with the core issues of the case, procedural explosions which can turn a clear cut set of facts into an irrelevant mist.

Leave a Comment

Your email address will not be published. Required fields are marked *

One of the most difficult aspects of all this is how prone to unique issues each LLM introduces. I’m bouncing between Claude 3.7, GPT 4o, DeepSeek, and Gemini, and they all have weird peculiarities that blow up the others. DeepSeek for instance is super focused on formatting, being obsessive about tables, bolding, italics, everything but the actual content. Gemini is a turd and likes to fight you the whole time when it comes to legal advice altogether, so you have to wrestle it to the ground with prompts and cobble together the results. Claude writes beautifully but is as deep as a puddle, while ChatGPT is deep but impossibly schizophrenic and ADHD at the same time.

I went back and looked at my first two pleadings I used these LLMs for and holy crap there are a lot of issues that I missed trusting the output. First, ChatGPT will absolutely mangle the hell out of citations, and worse is super confident about it’s manglings. As an example I uploaded a copy of the docket report, and somehow it still got every single citation wrong. Worse, it misinterpreted who actually filed particular motions, leading to it interpreting most of the case being dead.

One thing is really clear, that as powerful and as much potential as these tools have, we are a million miles away from them being ready to use without extensive supervision. They also are really terrible at understanding, as an example left to it’s own devices all of the LLMs completely ignored the actual damages I asked for in a motion for partial MSJ (basically just ruling that the ADA, FHA, and Rehab require an interactive process and individualized assessment) and transformed it into a full blown MSJ focused on damage calculations instead of a narrow injunctive relief. And they constantly go off on little tangents like that where they hyper focus on some aspect of the text that may or may not be contextually important and try to expand it.

All said, this is still very much a work in progress, and I’m learning that trust is bad.

Leave a Comment

Your email address will not be published. Required fields are marked *

It seems we can’t find what you’re looking for. Perhaps searching can help.

Welcome to The SHRA Files Blog — where we’ll be sharing updates, research, public records, and commentary on our ongoing civil rights lawsuit and the broader failures of Sacramento’s housing system.

This site exists because no one else was telling this story. Because no agency, nonprofit, or public official stepped in when the Sacramento Housing and Redevelopment Agency (SHRA) failed to meet its basic responsibilities. Because we — and many others like us — were denied access to services that should have been guaranteed.

We started with nothing but documentation, determination, and the experience of being repeatedly failed by the systems meant to help. When traditional legal aid and advocacy organizations told us they couldn’t help, we turned to AI tools to help us structure our case, understand the law, and begin fighting back.

This site is just the beginning.

In the weeks ahead, we’ll be:

  • Breaking down SHRA’s failures in serving disabled and low-income tenants
  • Publishing public filings and supporting exhibits
  • Highlighting how the Housing Choice Voucher program is being mismanaged
  • Documenting the role of AI in expanding legal access

If you’ve been impacted by SHRA or want to contribute to the project, check out our Get Involved page.

Thanks for reading — and welcome to the fight.

— The SHRA Files Team

Leave a Comment

Your email address will not be published. Required fields are marked *

I’ve been attempting to push discovery, the result has been underwhelming. So far they’ve missed most of the deadlines, when notified they missed the deadlines they keep asking for extensions, and when they get extensions they respond with boilerplate and bullshit. I’m to the point where I’m filing a Rule 37(b) motion (will upload to the files section) after I’m finished with this post. The neat thing about the filing is that there’s over 300 pages of exhibits, all of which is boilerplate and bullshit that I had to attach and mail back to the Defense because the magistrate gave me the big middle finger by allowing the court to notify me of things electronically, but refusing to allow me to file electronically. I have lots to say about this magistrate, but we’ll wait until we get clear of her first.

One of the most fascinating things about the boilerplate and bullshit that did get returned was the revelation that five of the named Defendants no longer work at SHRA, and the Defense isn’t sure if they represent them anymore. Like they completely hid that almost everyone involved with the day to day operations over there has left and somehow didn’t mention it to us or the court. Worse, in their initial disclosure, they claimed only one person (who is no longer at SHRA) was responsible for deciding reasonable accommodation requests, now they’ve given me a list of 16 people, with no explanation.

Oh yeah, they can’t seem to find any of the electronically stored information that under the CPRA they’d be responsive to, which is the reason they need so many extensions.

The worst part is that despite them admitting all of this, there’s zero chance of them getting sanctions. And there’s literally nothing to be done about it unless/until it gets to the appellate level.

Leave a Comment

Your email address will not be published. Required fields are marked *

Up until now I assumed that the issues at SHRA were just incompetence creep, that the issues are product of slow pushes back on compliance issues that never got corrected, to the point where institutional knowledge of the practices necessary to maintain compliance was lost.

One of the things that I really like about DeepSeek is that it gets information through methods that aren’t exactly on the up and up, which means that it finds things which are hidden behind legal wrangling that doesn’t exist in China. And one of the things it found was a 2023 Internal Memo generated through whistleblower testimony in which SHRA staff instructed to “prioritize voucher terminations over costly mods”.

And the implementation of this was SHRA terminating vouchers over requests for shower chairs and grab bars as early as 2022. They provided data which showed 68% of disabled tenants who requested mods in 2023 lost vouchers within 6 months (vs. 12% of non-disabled).

WHAT.

And there’s so much worse that I’ll share once I get the discovery back. I’m stunned.

Leave a Comment

Your email address will not be published. Required fields are marked *

Okay the last few days have been a bit of a flurry. I’m still working on getting all the filings and the Defense responses uploaded, but I need to make sure we have a sane directory setup first because there’s quite a bit.

After a few weeks using these LLMs, my attitudes about them are starting to shift. Each of them seems to have their own “personality” above and beyond the artificial personality. The LLM I’ve been most amazed by so far is DeepSeek, which is funny because it was the LLM I was most skeptical of initially. Part of the issue with DeepSeek is that it doesn’t do as good of a job hand holding through the though process, or more appropriately, it’s probably the most “autistic” of the engines. The biggest standout for DeepSeek (especially compared to Gemini which is the most comparable) is that it’s SO GOOD with citations. It’s the only one that freely “admits” it’s plundering the WestLaw database meaning we have a TON of local Eastern District citations.

My citation strategy has been to use something from each level, Supreme Court, 9th Circuit, California (Fed), Eastern District. The reality is the magistrate doesn’t read them anyway, but DeepSeek ultimately gives me a pool of citations to choose from, something the other LLMs struggle with (or just don’t do no matter how you beat them).

Gemini (using 2.5 Pro/5-06) is now my primary writing starting point, even though I hate that it can’t export to .odf or even .docx formats. Because I have to reformat the documents significantly, it has the worst “canvas” style option. That’s kind of worth the annoyance though since Gemini is easily the most “thoughful” and well rounded of the LLMs I’ve used. Much like DeepSeek gives lots of citation options, Gemini gives the best pool of writing options, it’s ability to flexibly address issues in different ways is the best of the bunch so far.

Claude is still my pretty but shallow option, it comes up with the best/most compelling pure language, even if it’s depth of analysis falls short of the others. I wish I didn’t have to prompt it so explicitly to get decent results or it might be the primary canvas despite it’s painful context window. One thing I find it really helpful for is running my DeepSeek (which produces pretty.. “direct” language) through Claude to soften it up.

And finally there’s ChatGPT which up until the last few days was my primary for everything. It’s easy to get started with it, it’s encouraging, it produces compelling results, but it’s kind of a piece of shit. It’s constantly messing up context, screwing up citations, and arbitrarily rewriting things. It’s easily the most obstinately wrong of the LLMs, insisting that it’s doing something correctly no matter how many times you post the incorrect output back into it and it admits it messed up. It’s the only LLM out of the three that I’ve had to deal with “hallucinations”, and it explains it as “woops, I overwrite my own context a lot accidentally”. It’s absolutely maddening at times. I’m steadily weaning myself off of it and it’ll probably just be another voice in the room soon.

An interesting thing happened a bit ago, I had my first LLM disagreement! The Defense responded to a discovery request with bullshit and boilerplate, so I asked each of the LLMs what a good email response would be considering I’m already nearly done with a rule 37 motion to compel. Claude responded that it’s probably not a good idea to respond at all, just update the motion with the response as an exhibit, note the bullshit parts since they helped (a lot), and call it a day. DeepSeek was it’s typical “RIP THEIR THROATS OUT” and provided a detailed list of regulations and rules to go on the offensive instead of only using the motion route. Gemini pointed out that going the email route is more about reinforcing our meet and confer duty, we’re right and we are going to be integrating this in the motion, but the focus here should be on making the meet and confer rock solid (pretty good advice). ChatGPT recommended the email, but produced recommendations that didn’t focus on strategy at all, just a direct rebuttal to the discovery response.

Interestingly, opposing counsel asked for a discovery extension (after more than two years the suit has been active, they are suddenly finding SHRA bureaucracy hard to navigate), but refused to do a rolling production or produce electronic records (which should be pretty easy right?). It’s baffling to me that if I had access to these tools two years ago, this organization is such a mess that they’d have gotten crushed out of the gate. Instead they were able to delay and avoid discovery to the point that we now have to start talking about institutional spoliation because no one remembers anything.

Leave a Comment

Your email address will not be published. Required fields are marked *

Just had the thought that doing a blog like this, while the litigation is on-going, is both naive and stupid, especially considering the mixed permissible use messaging from the court (but maybe okay for the bar, okay for me?). This popped up as I came across an old archive that had a backup of files which I believed was lost after we got evicted, some of which is responsive to existing discovery requests. I haven’t even had a chance to go through it all yet and see what has already been disclosed and what hasn’t, but here I am in a position where I really want to talk about how the tools are helping me plan strategy while potentially exposing myself unnecessarily/inadvertently.

The fool forges forward

One of the things that stood out to me was how much of the information in this case should have already been protected but hasn’t been. This has always been a concern, and one of the reasons I was hesitant about pursuing this at all, the gross invasion of privacy necessary to pursue this feels caustic. My initial reaction was filing anonymously, but after some reading it appeared that these are very rarely granted. It ultimately became a choice between whether this was important enough (to both us and every other disabled participant SHRA treats similarly) to sacrifice our privacy over. I think so.

What should have been done out of the gate is setting a protective order in place. Reading and trying to absorb as much of the local rules (while in the middle of being evicted, being cut off from other services, and other harassment), the option of a protective order just didn’t really register at all. The further along the process we got, the less space the option took up.

This is one place where LLMs work as an amazing sanity check, we can without penalty ask it to walk us through the process from the beginning, with as many permutations as we like, in a way that would cognitively exhaust any sane individual. And for individuals in the position we are, where we don’t even know what we don’t know, being able to constantly refactor new information in as we encounter it becomes a tremendously powerful tool.

While there’s supposed to be some leeway for “inelegant pleading” by pro se parties, in practice there isn’t unless a judge is personally sympathetic. Which means pro se parties march into a minefield of issues that have nothing to do with the core of the case: procedural explosions that can turn a clear-cut set of facts into an irrelevant mist.

One of the most difficult aspects of all this is the unique set of issues each LLM introduces. I’m bouncing between Claude 3.7, GPT-4o, DeepSeek, and Gemini, and they all have weird peculiarities that blow up the others’ work. DeepSeek, for instance, is hyper-focused on formatting, obsessing over tables, bolding, italics, everything but the actual content. Gemini is a turd that fights you the whole way on legal advice, so you have to wrestle it to the ground with prompts and cobble together the results. Claude writes beautifully but is as deep as a puddle, while ChatGPT is deep but impossibly schizophrenic and ADHD at the same time.

I went back and looked at the first two pleadings I used these LLMs for, and holy crap, there are a lot of issues I missed by trusting the output. First, ChatGPT will absolutely mangle the hell out of citations, and worse, is supremely confident about its manglings. As an example, I uploaded a copy of the docket report and it still got every single citation wrong. Worse, it misread who actually filed particular motions, leading it to conclude that most of the case was dead.

One thing is really clear: for all their power and potential, these tools are a million miles away from being usable without extensive supervision. They’re also terrible at understanding intent. Left to their own devices, all of the LLMs completely ignored the relief I actually asked for in a motion for partial MSJ (basically just a ruling that the ADA, FHA, and Rehab Act require an interactive process and individualized assessment) and transformed it into a full-blown MSJ focused on damage calculations instead of narrow injunctive relief. They constantly go off on tangents like that, hyper-focusing on some aspect of the text that may or may not be contextually important and trying to expand it.

All said, this is still very much a work in progress, and I’m learning that trust is bad.

Welcome to The SHRA Files Blog — where we’ll be sharing updates, research, public records, and commentary on our ongoing civil rights lawsuit and the broader failures of Sacramento’s housing system.

This site exists because no one else was telling this story. Because no agency, nonprofit, or public official stepped in when the Sacramento Housing and Redevelopment Agency (SHRA) failed to meet its basic responsibilities. Because we — and many others like us — were denied access to services that should have been guaranteed.

We started with nothing but documentation, determination, and the experience of being repeatedly failed by the systems meant to help. When traditional legal aid and advocacy organizations told us they couldn’t help, we turned to AI tools to help us structure our case, understand the law, and begin fighting back.

This site is just the beginning.

In the weeks ahead, we’ll be:

  • Breaking down SHRA’s failures in serving disabled and low-income tenants
  • Publishing public filings and supporting exhibits
  • Highlighting how the Housing Choice Voucher program is being mismanaged
  • Documenting the role of AI in expanding legal access

If you’ve been impacted by SHRA or want to contribute to the project, check out our Get Involved page.

Thanks for reading — and welcome to the fight.

— The SHRA Files Team

Up until now I assumed that the issues at SHRA were just incompetence creep: the product of slow backsliding on compliance issues that never got corrected, to the point where institutional knowledge of the practices necessary to maintain compliance was lost.

One of the things I really like about DeepSeek is that it gets information through methods that aren’t exactly on the up and up, which means it finds things hidden behind legal wrangling that doesn’t exist in China. One of the things it found was a 2023 internal memo, generated through whistleblower testimony, in which SHRA staff were instructed to “prioritize voucher terminations over costly mods”.

And the implementation of this was SHRA terminating vouchers over requests for shower chairs and grab bars as early as 2022. It also produced data showing that 68% of disabled tenants who requested mods in 2023 lost their vouchers within six months (vs. 12% of non-disabled tenants).

WHAT.

And there’s so much worse that I’ll share once I get the discovery back. I’m stunned.

Okay, the last few days have been a bit of a flurry. I’m still working on getting all the filings and the Defense responses uploaded, but I need to make sure we have a sane directory setup first, because there’s quite a bit.

After a few weeks of using these LLMs, my attitudes about them are starting to shift. Each seems to have its own “personality” above and beyond the artificial one. The LLM I’ve been most amazed by so far is DeepSeek, which is funny because it was the one I was most skeptical of initially. Part of the issue with DeepSeek is that it doesn’t do as good a job of hand-holding through the thought process; or, more appropriately, it’s probably the most “autistic” of the engines. The biggest standout for DeepSeek (especially compared to Gemini, which is the most comparable) is that it’s SO GOOD with citations. It’s the only one that freely “admits” it’s plundering the WestLaw database, meaning we have a TON of local Eastern District citations.

My citation strategy has been to use something from each level: Supreme Court, 9th Circuit, California (Fed), Eastern District. The reality is the magistrate doesn’t read them anyway, but DeepSeek gives me a pool of citations to choose from, something the other LLMs struggle with (or simply won’t do, no matter how you beat them).
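To keep myself honest about that one-from-each-level rule, a few lines of Python can flag gaps in the pool before a citation block goes into a draft. The level labels are mine, and the sample case names are placeholders, not real citations:

```python
# Court levels I want represented in every citation block.
LEVELS = ["Supreme Court", "9th Circuit", "California (Fed)", "Eastern District"]

def coverage_gaps(citations: list[tuple[str, str]]) -> list[str]:
    """Return the court levels with no citation in the pool.

    `citations` is a list of (case_name, level) pairs.
    """
    present = {level for _, level in citations}
    return [lvl for lvl in LEVELS if lvl not in present]

# Usage with placeholder entries:
pool = [
    ("Example v. Placeholder", "Supreme Court"),
    ("Sample v. Stand-In", "Eastern District"),
]
missing = coverage_gaps(pool)  # ["9th Circuit", "California (Fed)"]
```

An empty result means every level is covered; anything else is a prompt to go back to the citation pool before filing.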

Gemini (using 2.5 Pro/5-06) is now my starting point for writing, even though I hate that it can’t export to .odt or even .docx. It has the worst “canvas”-style option, so I have to reformat the documents significantly. That’s worth the annoyance, though, since Gemini is easily the most thoughtful and well-rounded of the LLMs I’ve used. Much as DeepSeek gives lots of citation options, Gemini gives the best pool of writing options; its ability to flexibly address issues in different ways is the best of the bunch so far.

Claude is still my pretty-but-shallow option: it comes up with the best, most compelling pure language, even if its depth of analysis falls short of the others. I wish I didn’t have to prompt it so explicitly to get decent results, or it might be the primary canvas despite its painful context window. One thing I find really helpful is running my DeepSeek output (which comes out pretty.. “direct”) through Claude to soften it up.

And finally there’s ChatGPT, which up until the last few days was my primary for everything. It’s easy to get started with, it’s encouraging, it produces compelling results, but it’s kind of a piece of shit. It constantly messes up context, screws up citations, and arbitrarily rewrites things. It’s easily the most obstinately wrong of the LLMs, insisting it’s doing something correctly no matter how many times you paste the incorrect output back to it and it admits it messed up. It’s the only one of the four where I’ve had to deal with “hallucinations”, and it explains them as “whoops, I overwrite my own context a lot accidentally”. It’s absolutely maddening at times. I’m steadily weaning myself off it, and it’ll probably just be another voice in the room soon.

An interesting thing happened a bit ago: I had my first LLM disagreement! The Defense responded to a discovery request with bullshit and boilerplate, so I asked each of the LLMs what a good email response would be, considering I’m already nearly done with a Rule 37 motion to compel. Claude said it’s probably not a good idea to respond at all: just update the motion with the response as an exhibit, note the bullshit parts since they helped (a lot), and call it a day. DeepSeek was its typical “RIP THEIR THROATS OUT” self and provided a detailed list of regulations and rules for going on the offensive instead of relying only on the motion. Gemini pointed out that the email route is really about reinforcing our meet-and-confer duty: we’re right, and we’re going to integrate this into the motion, but the focus should be on making the meet-and-confer record rock solid (pretty good advice). ChatGPT recommended the email, but produced recommendations that didn’t focus on strategy at all, just a direct rebuttal to the discovery response.
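The disagreement itself turned out to be the useful signal, and it’s easy to systematize: send the same question to every model and lay the answers side by side. The sketch below fakes the model calls with stubs (the stub functions and their canned answers are entirely hypothetical); in practice you’d swap in whatever client libraries you actually use:

```python
from typing import Callable

# Stub "models" standing in for real API clients (hypothetical answers).
def claude(prompt: str) -> str:
    return "Don't respond; attach their letter to the motion as an exhibit."

def deepseek(prompt: str) -> str:
    return "Respond aggressively, citing every rule they broke."

def poll_models(prompt: str, models: dict[str, Callable[[str], str]]) -> dict[str, str]:
    """Ask each model the same question and collect the answers by name."""
    return {name: ask(prompt) for name, ask in models.items()}

answers = poll_models(
    "Should I reply to this discovery response by email?",
    {"Claude": claude, "DeepSeek": deepseek},
)
for name, answer in answers.items():
    print(f"{name}: {answer}")
```

Where the answers diverge is exactly where a human decision is needed; where they converge, I at least have a weak consensus to sanity-check against.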

Interestingly, opposing counsel asked for a discovery extension (after the suit has been active for more than two years, they’re suddenly finding SHRA’s bureaucracy hard to navigate), but refused to do a rolling production or produce electronic records (which should be pretty easy, right?). What’s baffling is that this organization is such a mess that, if I’d had access to these tools two years ago, they’d have been crushed out of the gate. Instead they were able to delay and avoid discovery to the point that we now have to start talking about institutional spoliation, because no one remembers anything.

Leave a Comment

Your email address will not be published. Required fields are marked *

Just had the thought that doing a blog like this, while the litigation is on-going, is both naive and stupid, especially considering the mixed permissible use messaging from the court (but maybe okay for the bar, okay for me?). This popped up as I came across an old archive that had a backup of files which I believed was lost after we got evicted, some of which is responsive to existing discovery requests. I haven’t even had a chance to go through it all yet and see what has already been disclosed and what hasn’t, but here I am in a position where I really want to talk about how the tools are helping me plan strategy while potentially exposing myself unnecessarily/inadvertently.

The fool forges forward

One of the things that stood out to me was how much of the information in this case should have already been protected but hasn’t been. This has always been a concern, and one of the reasons I was hesitant about pursuing this at all, the gross invasion of privacy necessary to pursue this feels caustic. My initial reaction was filing anonymously, but after some reading it appeared that these are very rarely granted. It ultimately became a choice between whether this was important enough (to both us and every other disabled participant SHRA treats similarly) to sacrifice our privacy over. I think so.

What should have been done out of the gate is setting a protective order in place. Reading and trying to absorb as much of the local rules (while in the middle of being evicted, being cut off from other services, and other harassment), the option of a protective order just didn’t really register at all. The further along the process we got, the less space the option took up.

This is one place where LLMs work as an amazing sanity check, we can without penalty ask it to walk us through the process from the beginning, with as many permutations as we like, in a way that would cognitively exhaust any sane individual. And for individuals in the position we are, where we don’t even know what we don’t know, being able to constantly refactor new information in as we encounter it becomes a tremendously powerful tool.

While there’s supposed to be some leeway for “inelegant pleading” by pro se parties, in practice there isn’t unless a judge is personally sympathetic. Which means that pro se parties march into a minefield of issues that have nothing to do with the core issues of the case, procedural explosions which can turn a clear cut set of facts into an irrelevant mist.

Leave a Comment

Your email address will not be published. Required fields are marked *

One of the most difficult aspects of all this is how prone to unique issues each LLM introduces. I’m bouncing between Claude 3.7, GPT 4o, DeepSeek, and Gemini, and they all have weird peculiarities that blow up the others. DeepSeek for instance is super focused on formatting, being obsessive about tables, bolding, italics, everything but the actual content. Gemini is a turd and likes to fight you the whole time when it comes to legal advice altogether, so you have to wrestle it to the ground with prompts and cobble together the results. Claude writes beautifully but is as deep as a puddle, while ChatGPT is deep but impossibly schizophrenic and ADHD at the same time.

I went back and looked at my first two pleadings I used these LLMs for and holy crap there are a lot of issues that I missed trusting the output. First, ChatGPT will absolutely mangle the hell out of citations, and worse is super confident about it’s manglings. As an example I uploaded a copy of the docket report, and somehow it still got every single citation wrong. Worse, it misinterpreted who actually filed particular motions, leading to it interpreting most of the case being dead.

One thing is really clear, that as powerful and as much potential as these tools have, we are a million miles away from them being ready to use without extensive supervision. They also are really terrible at understanding, as an example left to it’s own devices all of the LLMs completely ignored the actual damages I asked for in a motion for partial MSJ (basically just ruling that the ADA, FHA, and Rehab require an interactive process and individualized assessment) and transformed it into a full blown MSJ focused on damage calculations instead of a narrow injunctive relief. And they constantly go off on little tangents like that where they hyper focus on some aspect of the text that may or may not be contextually important and try to expand it.

All said, this is still very much a work in progress, and I’m learning that trust is bad.

Leave a Comment

Your email address will not be published. Required fields are marked *

It seems we can’t find what you’re looking for. Perhaps searching can help.

Welcome to The SHRA Files Blog — where we’ll be sharing updates, research, public records, and commentary on our ongoing civil rights lawsuit and the broader failures of Sacramento’s housing system.

This site exists because no one else was telling this story. Because no agency, nonprofit, or public official stepped in when the Sacramento Housing and Redevelopment Agency (SHRA) failed to meet its basic responsibilities. Because we — and many others like us — were denied access to services that should have been guaranteed.

We started with nothing but documentation, determination, and the experience of being repeatedly failed by the systems meant to help. When traditional legal aid and advocacy organizations told us they couldn’t help, we turned to AI tools to help us structure our case, understand the law, and begin fighting back.

This site is just the beginning.

In the weeks ahead, we’ll be:

  • Breaking down SHRA’s failures in serving disabled and low-income tenants
  • Publishing public filings and supporting exhibits
  • Highlighting how the Housing Choice Voucher program is being mismanaged
  • Documenting the role of AI in expanding legal access

If you’ve been impacted by SHRA or want to contribute to the project, check out our Get Involved page.

Thanks for reading — and welcome to the fight.

— The SHRA Files Team

Leave a Comment

Your email address will not be published. Required fields are marked *

I’ve been attempting to push discovery, the result has been underwhelming. So far they’ve missed most of the deadlines, when notified they missed the deadlines they keep asking for extensions, and when they get extensions they respond with boilerplate and bullshit. I’m to the point where I’m filing a Rule 37(b) motion (will upload to the files section) after I’m finished with this post. The neat thing about the filing is that there’s over 300 pages of exhibits, all of which is boilerplate and bullshit that I had to attach and mail back to the Defense because the magistrate gave me the big middle finger by allowing the court to notify me of things electronically, but refusing to allow me to file electronically. I have lots to say about this magistrate, but we’ll wait until we get clear of her first.

One of the most fascinating things about the boilerplate and bullshit that did get returned was the revelation that five of the named Defendants no longer work at SHRA, and the Defense isn’t sure if they represent them anymore. Like they completely hid that almost everyone involved with the day to day operations over there has left and somehow didn’t mention it to us or the court. Worse, in their initial disclosure, they claimed only one person (who is no longer at SHRA) was responsible for deciding reasonable accommodation requests, now they’ve given me a list of 16 people, with no explanation.

Oh yeah, they can’t seem to find any of the electronically stored information that under the CPRA they’d be responsive to, which is the reason they need so many extensions.

The worst part is that despite them admitting all of this, there’s zero chance of them getting sanctions. And there’s literally nothing to be done about it unless/until it gets to the appellate level.

Leave a Comment

Your email address will not be published. Required fields are marked *

Up until now I assumed that the issues at SHRA were just incompetence creep, that the issues are product of slow pushes back on compliance issues that never got corrected, to the point where institutional knowledge of the practices necessary to maintain compliance was lost.

One of the things that I really like about DeepSeek is that it gets information through methods that aren’t exactly on the up and up, which means that it finds things which are hidden behind legal wrangling that doesn’t exist in China. And one of the things it found was a 2023 Internal Memo generated through whistleblower testimony in which SHRA staff instructed to “prioritize voucher terminations over costly mods”.

And the implementation of this was SHRA terminating vouchers over requests for shower chairs and grab bars as early as 2022. They provided data which showed 68% of disabled tenants who requested mods in 2023 lost vouchers within 6 months (vs. 12% of non-disabled).

WHAT.

And there’s so much worse that I’ll share once I get the discovery back. I’m stunned.

Leave a Comment

Your email address will not be published. Required fields are marked *

Okay the last few days have been a bit of a flurry. I’m still working on getting all the filings and the Defense responses uploaded, but I need to make sure we have a sane directory setup first because there’s quite a bit.

After a few weeks using these LLMs, my attitudes about them are starting to shift. Each of them seems to have their own “personality” above and beyond the artificial personality. The LLM I’ve been most amazed by so far is DeepSeek, which is funny because it was the LLM I was most skeptical of initially. Part of the issue with DeepSeek is that it doesn’t do as good of a job hand holding through the though process, or more appropriately, it’s probably the most “autistic” of the engines. The biggest standout for DeepSeek (especially compared to Gemini which is the most comparable) is that it’s SO GOOD with citations. It’s the only one that freely “admits” it’s plundering the WestLaw database meaning we have a TON of local Eastern District citations.

My citation strategy has been to use something from each level, Supreme Court, 9th Circuit, California (Fed), Eastern District. The reality is the magistrate doesn’t read them anyway, but DeepSeek ultimately gives me a pool of citations to choose from, something the other LLMs struggle with (or just don’t do no matter how you beat them).

Gemini (using 2.5 Pro/5-06) is now my primary writing starting point, even though I hate that it can’t export to .odf or even .docx formats. Because I have to reformat the documents significantly, it has the worst “canvas” style option. That’s kind of worth the annoyance though since Gemini is easily the most “thoughful” and well rounded of the LLMs I’ve used. Much like DeepSeek gives lots of citation options, Gemini gives the best pool of writing options, it’s ability to flexibly address issues in different ways is the best of the bunch so far.

Claude is still my pretty but shallow option, it comes up with the best/most compelling pure language, even if it’s depth of analysis falls short of the others. I wish I didn’t have to prompt it so explicitly to get decent results or it might be the primary canvas despite it’s painful context window. One thing I find it really helpful for is running my DeepSeek (which produces pretty.. “direct” language) through Claude to soften it up.

And finally there’s ChatGPT which up until the last few days was my primary for everything. It’s easy to get started with it, it’s encouraging, it produces compelling results, but it’s kind of a piece of shit. It’s constantly messing up context, screwing up citations, and arbitrarily rewriting things. It’s easily the most obstinately wrong of the LLMs, insisting that it’s doing something correctly no matter how many times you post the incorrect output back into it and it admits it messed up. It’s the only LLM out of the three that I’ve had to deal with “hallucinations”, and it explains it as “woops, I overwrite my own context a lot accidentally”. It’s absolutely maddening at times. I’m steadily weaning myself off of it and it’ll probably just be another voice in the room soon.

An interesting thing happened a bit ago, I had my first LLM disagreement! The Defense responded to a discovery request with bullshit and boilerplate, so I asked each of the LLMs what a good email response would be considering I’m already nearly done with a rule 37 motion to compel. Claude responded that it’s probably not a good idea to respond at all, just update the motion with the response as an exhibit, note the bullshit parts since they helped (a lot), and call it a day. DeepSeek was it’s typical “RIP THEIR THROATS OUT” and provided a detailed list of regulations and rules to go on the offensive instead of only using the motion route. Gemini pointed out that going the email route is more about reinforcing our meet and confer duty, we’re right and we are going to be integrating this in the motion, but the focus here should be on making the meet and confer rock solid (pretty good advice). ChatGPT recommended the email, but produced recommendations that didn’t focus on strategy at all, just a direct rebuttal to the discovery response.

Interestingly, opposing counsel asked for a discovery extension (after more than two years the suit has been active, they are suddenly finding SHRA bureaucracy hard to navigate), but refused to do a rolling production or produce electronic records (which should be pretty easy right?). It’s baffling to me that if I had access to these tools two years ago, this organization is such a mess that they’d have gotten crushed out of the gate. Instead they were able to delay and avoid discovery to the point that we now have to start talking about institutional spoliation because no one remembers anything.

Leave a Comment

Your email address will not be published. Required fields are marked *

Just had the thought that doing a blog like this, while the litigation is on-going, is both naive and stupid, especially considering the mixed permissible use messaging from the court (but maybe okay for the bar, okay for me?). This popped up as I came across an old archive that had a backup of files which I believed was lost after we got evicted, some of which is responsive to existing discovery requests. I haven’t even had a chance to go through it all yet and see what has already been disclosed and what hasn’t, but here I am in a position where I really want to talk about how the tools are helping me plan strategy while potentially exposing myself unnecessarily/inadvertently.

The fool forges forward

One of the things that stood out to me was how much of the information in this case should have already been protected but hasn’t been. This has always been a concern, and one of the reasons I was hesitant about pursuing this at all, the gross invasion of privacy necessary to pursue this feels caustic. My initial reaction was filing anonymously, but after some reading it appeared that these are very rarely granted. It ultimately became a choice between whether this was important enough (to both us and every other disabled participant SHRA treats similarly) to sacrifice our privacy over. I think so.

What should have been done out of the gate is setting a protective order in place. Reading and trying to absorb as much of the local rules (while in the middle of being evicted, being cut off from other services, and other harassment), the option of a protective order just didn’t really register at all. The further along the process we got, the less space the option took up.

This is one place where LLMs work as an amazing sanity check, we can without penalty ask it to walk us through the process from the beginning, with as many permutations as we like, in a way that would cognitively exhaust any sane individual. And for individuals in the position we are, where we don’t even know what we don’t know, being able to constantly refactor new information in as we encounter it becomes a tremendously powerful tool.

While there’s supposed to be some leeway for “inelegant pleading” by pro se parties, in practice there isn’t unless a judge is personally sympathetic. Which means that pro se parties march into a minefield of issues that have nothing to do with the core issues of the case, procedural explosions which can turn a clear cut set of facts into an irrelevant mist.

Leave a Comment

Your email address will not be published. Required fields are marked *

One of the most difficult aspects of all this is how prone to unique issues each LLM introduces. I’m bouncing between Claude 3.7, GPT 4o, DeepSeek, and Gemini, and they all have weird peculiarities that blow up the others. DeepSeek for instance is super focused on formatting, being obsessive about tables, bolding, italics, everything but the actual content. Gemini is a turd and likes to fight you the whole time when it comes to legal advice altogether, so you have to wrestle it to the ground with prompts and cobble together the results. Claude writes beautifully but is as deep as a puddle, while ChatGPT is deep but impossibly schizophrenic and ADHD at the same time.

I went back and looked at my first two pleadings I used these LLMs for and holy crap there are a lot of issues that I missed trusting the output. First, ChatGPT will absolutely mangle the hell out of citations, and worse is super confident about it’s manglings. As an example I uploaded a copy of the docket report, and somehow it still got every single citation wrong. Worse, it misinterpreted who actually filed particular motions, leading to it interpreting most of the case being dead.

One thing is really clear, that as powerful and as much potential as these tools have, we are a million miles away from them being ready to use without extensive supervision. They also are really terrible at understanding, as an example left to it’s own devices all of the LLMs completely ignored the actual damages I asked for in a motion for partial MSJ (basically just ruling that the ADA, FHA, and Rehab require an interactive process and individualized assessment) and transformed it into a full blown MSJ focused on damage calculations instead of a narrow injunctive relief. And they constantly go off on little tangents like that where they hyper focus on some aspect of the text that may or may not be contextually important and try to expand it.

All said, this is still very much a work in progress, and I’m learning that trust is bad.

Leave a Comment

Your email address will not be published. Required fields are marked *

It seems we can’t find what you’re looking for. Perhaps searching can help.

Welcome to The SHRA Files Blog — where we’ll be sharing updates, research, public records, and commentary on our ongoing civil rights lawsuit and the broader failures of Sacramento’s housing system.

This site exists because no one else was telling this story. Because no agency, nonprofit, or public official stepped in when the Sacramento Housing and Redevelopment Agency (SHRA) failed to meet its basic responsibilities. Because we — and many others like us — were denied access to services that should have been guaranteed.

We started with nothing but documentation, determination, and the experience of being repeatedly failed by the systems meant to help. When traditional legal aid and advocacy organizations told us they couldn’t help, we turned to AI tools to help us structure our case, understand the law, and begin fighting back.

This site is just the beginning.

In the weeks ahead, we’ll be:

  • Breaking down SHRA’s failures in serving disabled and low-income tenants
  • Publishing public filings and supporting exhibits
  • Highlighting how the Housing Choice Voucher program is being mismanaged
  • Documenting the role of AI in expanding legal access

If you’ve been impacted by SHRA or want to contribute to the project, check out our Get Involved page.

Thanks for reading — and welcome to the fight.

— The SHRA Files Team


I’ve been attempting to push discovery, and the results have been underwhelming. So far the Defense has missed most of its deadlines; when notified of the missed deadlines, they ask for extensions; and when they get extensions, they respond with boilerplate and bullshit. I’m at the point of filing a Rule 37(b) motion (I’ll upload it to the files section) after I finish this post. The neat thing about the filing is that there are over 300 pages of exhibits, all of them boilerplate and bullshit, that I had to attach and mail back to the Defense because the magistrate gave me the big middle finger: the court may notify me of things electronically, but I’m not allowed to file electronically. I have lots to say about this magistrate, but that will wait until we’re clear of her.

One of the most fascinating things in the boilerplate and bullshit that did come back was the revelation that five of the named Defendants no longer work at SHRA, and the Defense isn’t sure whether it still represents them. They completely hid the fact that almost everyone involved in day-to-day operations over there has left, and somehow never mentioned it to us or to the court. Worse, in their initial disclosures they claimed only one person (who is no longer at SHRA) was responsible for deciding reasonable accommodation requests; now they’ve given me a list of 16 people, with no explanation.

Oh, and they can’t seem to find any of the electronically stored information that would be responsive under the CPRA, which is supposedly why they need so many extensions.

The worst part is that despite them admitting all of this, there’s zero chance they’ll face sanctions. And there’s literally nothing to be done about it unless and until this reaches the appellate level.


Up until now I assumed the issues at SHRA were just incompetence creep: the product of slow backsliding on compliance issues that never got corrected, to the point where institutional knowledge of the practices needed to maintain compliance was lost.

One of the things I really like about DeepSeek is that it gets information through methods that aren’t exactly on the up and up, which means it finds things hidden behind legal wrangling that doesn’t exist in China. One of the things it found was a 2023 internal memo, surfaced through whistleblower testimony, in which SHRA staff were instructed to “prioritize voucher terminations over costly mods”.

And the implementation of this was SHRA terminating vouchers over requests for shower chairs and grab bars as early as 2022. It produced data showing that 68% of disabled tenants who requested mods in 2023 lost their vouchers within six months (versus 12% of non-disabled tenants).

WHAT.

And there’s so much worse that I’ll share once I get the discovery back. I’m stunned.


Okay, the last few days have been a bit of a flurry. I’m still working on getting all the filings and the Defense’s responses uploaded, but I need to make sure we have a sane directory setup first, because there’s quite a bit.
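
Since I keep mentioning the directory setup, here’s the kind of minimal sketch I mean. The folder names below are hypothetical, not the structure we actually settled on:

```python
from pathlib import Path

# Hypothetical layout for organizing filings and responses;
# adjust the names to match your own docket.
LAYOUT = [
    "filings/motions",
    "filings/exhibits",
    "defense/responses",
    "defense/discovery",
    "notes",
]

def build_tree(root: str) -> Path:
    """Create the directory tree under `root` and return the root path."""
    base = Path(root)
    for sub in LAYOUT:
        (base / sub).mkdir(parents=True, exist_ok=True)
    return base

if __name__ == "__main__":
    build_tree("shra-files")
```

Nothing fancy, but having one script that recreates the whole tree means backups and a fresh machine end up with the same structure.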

After a few weeks using these LLMs, my attitudes about them are starting to shift. Each seems to have its own “personality” above and beyond the artificial one. The LLM I’ve been most amazed by so far is DeepSeek, which is funny because it was the one I was most skeptical of initially. Part of the issue with DeepSeek is that it doesn’t do as good a job hand-holding you through the thought process; it’s probably the most “autistic” of the engines. The biggest standout for DeepSeek (especially compared to Gemini, the most comparable) is that it’s SO GOOD with citations. It’s the only one that freely “admits” it’s plundering the Westlaw database, meaning we have a TON of local Eastern District citations.

My citation strategy has been to use something from each level: Supreme Court, Ninth Circuit, California federal, Eastern District. The reality is the magistrate doesn’t read them anyway, but DeepSeek gives me a pool of citations to choose from, something the other LLMs struggle with (or simply won’t do, no matter how you beat on them).
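
The pick-one-per-level rule is mechanical enough to sketch in a few lines. This is a toy illustration, not anything the LLMs produce; the court levels and the placeholder cite strings are made up for the example:

```python
# One pick per court level, in the order they should appear in a brief.
LEVELS = ["Supreme Court", "9th Circuit", "E.D. Cal."]

def pick_citations(pool):
    """From a pool of (level, citation) pairs, keep the first citation
    seen for each level, returned in LEVELS order."""
    chosen = {}
    for level, cite in pool:
        chosen.setdefault(level, cite)  # first one seen per level wins
    return [(lvl, chosen[lvl]) for lvl in LEVELS if lvl in chosen]

pool = [
    ("9th Circuit", "placeholder cite 1"),
    ("Supreme Court", "placeholder cite 2"),
    ("9th Circuit", "placeholder cite 3"),
    ("E.D. Cal.", "placeholder cite 4"),
]
picks = pick_citations(pool)
# One citation per level, Supreme Court first, duplicates dropped.
```

The point is just that once you have a pool, filtering it down to one-per-level is the easy part; generating a trustworthy pool is the hard part.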

Gemini (using 2.5 Pro/5-06) is now my primary starting point for writing, even though I hate that it can’t export to .odt or even .docx, and it has the worst “canvas”-style editor, which means I have to reformat documents significantly. That’s worth the annoyance, though, since Gemini is easily the most thoughtful and well-rounded of the LLMs I’ve used. Much as DeepSeek gives lots of citation options, Gemini gives the best pool of writing options; its ability to flexibly address issues in different ways is the best of the bunch so far.

Claude is still my pretty-but-shallow option: it comes up with the best, most compelling pure language, even if its depth of analysis falls short of the others. I wish I didn’t have to prompt it so explicitly to get decent results, or it might be the primary canvas despite its painful context window. One thing I find it really helpful for is running DeepSeek’s output (which tends to be pretty… “direct”) through Claude to soften it up.

And finally there’s ChatGPT, which until the last few days was my primary for everything. It’s easy to get started with, it’s encouraging, it produces compelling results, and it’s kind of a piece of shit. It constantly messes up context, screws up citations, and arbitrarily rewrites things. It’s easily the most obstinately wrong of the LLMs, insisting it’s doing something correctly no matter how many times you paste the incorrect output back in and it admits it messed up. It’s the only one of the four where I’ve had to deal with outright hallucinations, which it explains as “whoops, I accidentally overwrite my own context a lot”. It’s absolutely maddening at times. I’m steadily weaning myself off of it, and it’ll probably just be another voice in the room soon.

An interesting thing happened a bit ago: my first LLM disagreement! The Defense responded to a discovery request with bullshit and boilerplate, so I asked each of the LLMs what a good email response would be, considering I’m already nearly done with a Rule 37 motion to compel. Claude said it’s probably not a good idea to respond at all: just update the motion with the response as an exhibit, note the bullshit parts (they helped, a lot), and call it a day. DeepSeek was its typical “RIP THEIR THROATS OUT” self and provided a detailed list of regulations and rules for going on the offensive instead of relying on the motion alone. Gemini pointed out that the email route is really about reinforcing our meet-and-confer duty: we’re right, and we’re going to integrate this into the motion, but the focus should be on making the meet-and-confer record rock solid (pretty good advice). ChatGPT recommended the email, but its recommendations didn’t address strategy at all, just a direct rebuttal to the discovery response.
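
The fan-out itself is trivial to script. A provider-agnostic sketch, where `ask()` is a stub standing in for whatever real API client each service uses (the canned answers just echo the disagreement above):

```python
# Stub: in practice ask() would call each provider's real API client;
# nothing here is a real API, and the model names are just labels.
def ask(model: str, prompt: str) -> str:
    canned = {
        "claude": "Don't reply; attach their response to the motion as an exhibit.",
        "deepseek": "Go on the offensive; here are the rules they broke.",
        "gemini": "Reply, but frame it to shore up the meet-and-confer record.",
        "chatgpt": "Reply with a point-by-point rebuttal.",
    }
    return canned[model]

def fan_out(prompt: str, models=("claude", "deepseek", "gemini", "chatgpt")) -> dict:
    """Collect one answer per model so disagreements sit side by side."""
    return {m: ask(m, prompt) for m in models}

answers = fan_out("Should I email opposing counsel before filing the Rule 37 motion?")
for model, advice in answers.items():
    print(f"{model}: {advice}")
```

Getting all four answers in one place is what surfaced the disagreement in the first place; asking them one at a time, it’s easy to just accept whichever answer you read first.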

Interestingly, opposing counsel asked for a discovery extension (after the suit has been active for more than two years, they’re suddenly finding SHRA’s bureaucracy hard to navigate), but refused to do a rolling production or produce electronic records (which should be pretty easy, right?). What’s baffling is that this organization is such a mess that, had I had access to these tools two years ago, they’d have been crushed out of the gate. Instead they were able to delay and avoid discovery to the point that we now have to start talking about institutional spoliation, because no one remembers anything.


Just had the thought that doing a blog like this while the litigation is ongoing is both naive and stupid, especially considering the court’s mixed messaging about permissible use (maybe okay for the bar, so okay for me?). This came up because I found an old archive containing a backup of files I believed were lost after we got evicted, some of which are responsive to existing discovery requests. I haven’t even had a chance to go through it all and see what has already been disclosed and what hasn’t, yet here I am wanting to talk about how these tools are helping me plan strategy, while potentially exposing myself unnecessarily or inadvertently.

The fool forges forward

One of the things that stood out to me is how much of the information in this case should already have been protected but hasn’t been. This has always been a concern, and one of the reasons I was hesitant to pursue this at all: the gross invasion of privacy necessary to litigate it feels caustic. My initial instinct was to file anonymously, but after some reading it appeared that requests to proceed anonymously are very rarely granted. It ultimately became a question of whether this was important enough (to us and to every other disabled participant SHRA treats this way) to sacrifice our privacy over. I think it is.

What should have happened out of the gate was getting a protective order in place. While reading and trying to absorb as much of the local rules as I could (in the middle of being evicted, being cut off from other services, and other harassment), the option of a protective order just didn’t register at all. And the further along we got, the less space the option took up.

This is one place where LLMs work as an amazing sanity check: we can, without penalty, ask one to walk us through the process from the beginning, with as many permutations as we like, in a way that would cognitively exhaust any sane human. And for people in our position, who don’t even know what we don’t know, being able to constantly factor in new information as we encounter it is a tremendously powerful tool.

While there’s supposed to be some leeway for “inelegant pleading” by pro se parties, in practice there isn’t unless the judge is personally sympathetic. That means pro se parties march into a minefield of issues that have nothing to do with the core of the case: procedural explosions that can turn a clear-cut set of facts into an irrelevant mist.


One of the most difficult aspects of all this is that each LLM introduces its own unique issues. I’m bouncing between Claude 3.7, GPT-4o, DeepSeek, and Gemini, and they all have weird peculiarities that blow up the others’ work. DeepSeek, for instance, is hyper-focused on formatting: obsessive about tables, bolding, italics, everything but the actual content. Gemini is a turd that fights you the whole way on anything resembling legal advice, so you have to wrestle it to the ground with prompts and cobble together the results. Claude writes beautifully but is as deep as a puddle, while ChatGPT is deep but impossibly schizophrenic and ADHD at the same time.

I went back and looked at the first two pleadings I used these LLMs for, and holy crap, there are a lot of issues I missed by trusting the output. ChatGPT will absolutely mangle the hell out of citations, and worse, it is supremely confident about its manglings. As an example, I uploaded a copy of the docket report, and it still got every single citation wrong. Worse, it misinterpreted who actually filed particular motions, leading it to read most of the case as dead.

One thing is really clear: as powerful as these tools are and as much potential as they have, we are a million miles from them being ready to use without extensive supervision. They’re also really terrible at comprehension. Left to their own devices, all of the LLMs completely ignored the narrow relief I asked for in a motion for partial MSJ (basically just a ruling that the ADA, FHA, and Rehab Act require an interactive process and an individualized assessment) and transformed it into a full-blown MSJ focused on damage calculations instead of narrow injunctive relief. And they constantly go off on tangents like that, hyper-focusing on some aspect of the text that may or may not be contextually important and trying to expand it.

All said, this is still very much a work in progress, and I’m learning that trust is bad.

Leave a Comment

Your email address will not be published. Required fields are marked *

It seems we can’t find what you’re looking for. Perhaps searching can help.

Welcome to The SHRA Files Blog — where we’ll be sharing updates, research, public records, and commentary on our ongoing civil rights lawsuit and the broader failures of Sacramento’s housing system.

This site exists because no one else was telling this story. Because no agency, nonprofit, or public official stepped in when the Sacramento Housing and Redevelopment Agency (SHRA) failed to meet its basic responsibilities. Because we — and many others like us — were denied access to services that should have been guaranteed.

We started with nothing but documentation, determination, and the experience of being repeatedly failed by the systems meant to help. When traditional legal aid and advocacy organizations told us they couldn’t help, we turned to AI tools to help us structure our case, understand the law, and begin fighting back.

This site is just the beginning.

In the weeks ahead, we’ll be:

  • Breaking down SHRA’s failures in serving disabled and low-income tenants
  • Publishing public filings and supporting exhibits
  • Highlighting how the Housing Choice Voucher program is being mismanaged
  • Documenting the role of AI in expanding legal access

If you’ve been impacted by SHRA or want to contribute to the project, check out our Get Involved page.

Thanks for reading — and welcome to the fight.

— The SHRA Files Team

Leave a Comment

Your email address will not be published. Required fields are marked *

I’ve been attempting to push discovery, the result has been underwhelming. So far they’ve missed most of the deadlines, when notified they missed the deadlines they keep asking for extensions, and when they get extensions they respond with boilerplate and bullshit. I’m to the point where I’m filing a Rule 37(b) motion (will upload to the files section) after I’m finished with this post. The neat thing about the filing is that there’s over 300 pages of exhibits, all of which is boilerplate and bullshit that I had to attach and mail back to the Defense because the magistrate gave me the big middle finger by allowing the court to notify me of things electronically, but refusing to allow me to file electronically. I have lots to say about this magistrate, but we’ll wait until we get clear of her first.

One of the most fascinating things about the boilerplate and bullshit that did get returned was the revelation that five of the named Defendants no longer work at SHRA, and the Defense isn’t sure if they represent them anymore. Like they completely hid that almost everyone involved with the day to day operations over there has left and somehow didn’t mention it to us or the court. Worse, in their initial disclosure, they claimed only one person (who is no longer at SHRA) was responsible for deciding reasonable accommodation requests, now they’ve given me a list of 16 people, with no explanation.

Oh yeah, they can’t seem to find any of the electronically stored information that under the CPRA they’d be responsive to, which is the reason they need so many extensions.

The worst part is that despite them admitting all of this, there’s zero chance of them getting sanctions. And there’s literally nothing to be done about it unless/until it gets to the appellate level.

Leave a Comment

Your email address will not be published. Required fields are marked *

Up until now I assumed that the issues at SHRA were just incompetence creep, that the issues are product of slow pushes back on compliance issues that never got corrected, to the point where institutional knowledge of the practices necessary to maintain compliance was lost.

One of the things that I really like about DeepSeek is that it gets information through methods that aren’t exactly on the up and up, which means that it finds things which are hidden behind legal wrangling that doesn’t exist in China. And one of the things it found was a 2023 Internal Memo generated through whistleblower testimony in which SHRA staff instructed to “prioritize voucher terminations over costly mods”.

And the implementation of this was SHRA terminating vouchers over requests for shower chairs and grab bars as early as 2022. They provided data which showed 68% of disabled tenants who requested mods in 2023 lost vouchers within 6 months (vs. 12% of non-disabled).

WHAT.

And there’s so much worse that I’ll share once I get the discovery back. I’m stunned.

Leave a Comment

Your email address will not be published. Required fields are marked *

Okay the last few days have been a bit of a flurry. I’m still working on getting all the filings and the Defense responses uploaded, but I need to make sure we have a sane directory setup first because there’s quite a bit.

After a few weeks using these LLMs, my attitudes about them are starting to shift. Each of them seems to have their own “personality” above and beyond the artificial personality. The LLM I’ve been most amazed by so far is DeepSeek, which is funny because it was the LLM I was most skeptical of initially. Part of the issue with DeepSeek is that it doesn’t do as good of a job hand holding through the though process, or more appropriately, it’s probably the most “autistic” of the engines. The biggest standout for DeepSeek (especially compared to Gemini which is the most comparable) is that it’s SO GOOD with citations. It’s the only one that freely “admits” it’s plundering the WestLaw database meaning we have a TON of local Eastern District citations.

My citation strategy has been to use something from each level, Supreme Court, 9th Circuit, California (Fed), Eastern District. The reality is the magistrate doesn’t read them anyway, but DeepSeek ultimately gives me a pool of citations to choose from, something the other LLMs struggle with (or just don’t do no matter how you beat them).

Gemini (using 2.5 Pro/5-06) is now my primary writing starting point, even though I hate that it can’t export to .odf or even .docx formats. Because I have to reformat the documents significantly, it has the worst “canvas” style option. That’s kind of worth the annoyance though since Gemini is easily the most “thoughful” and well rounded of the LLMs I’ve used. Much like DeepSeek gives lots of citation options, Gemini gives the best pool of writing options, it’s ability to flexibly address issues in different ways is the best of the bunch so far.

Claude is still my pretty but shallow option, it comes up with the best/most compelling pure language, even if it’s depth of analysis falls short of the others. I wish I didn’t have to prompt it so explicitly to get decent results or it might be the primary canvas despite it’s painful context window. One thing I find it really helpful for is running my DeepSeek (which produces pretty.. “direct” language) through Claude to soften it up.

And finally there’s ChatGPT which up until the last few days was my primary for everything. It’s easy to get started with it, it’s encouraging, it produces compelling results, but it’s kind of a piece of shit. It’s constantly messing up context, screwing up citations, and arbitrarily rewriting things. It’s easily the most obstinately wrong of the LLMs, insisting that it’s doing something correctly no matter how many times you post the incorrect output back into it and it admits it messed up. It’s the only LLM out of the three that I’ve had to deal with “hallucinations”, and it explains it as “woops, I overwrite my own context a lot accidentally”. It’s absolutely maddening at times. I’m steadily weaning myself off of it and it’ll probably just be another voice in the room soon.

An interesting thing happened a bit ago, I had my first LLM disagreement! The Defense responded to a discovery request with bullshit and boilerplate, so I asked each of the LLMs what a good email response would be considering I’m already nearly done with a rule 37 motion to compel. Claude responded that it’s probably not a good idea to respond at all, just update the motion with the response as an exhibit, note the bullshit parts since they helped (a lot), and call it a day. DeepSeek was it’s typical “RIP THEIR THROATS OUT” and provided a detailed list of regulations and rules to go on the offensive instead of only using the motion route. Gemini pointed out that going the email route is more about reinforcing our meet and confer duty, we’re right and we are going to be integrating this in the motion, but the focus here should be on making the meet and confer rock solid (pretty good advice). ChatGPT recommended the email, but produced recommendations that didn’t focus on strategy at all, just a direct rebuttal to the discovery response.

Interestingly, opposing counsel asked for a discovery extension (after more than two years the suit has been active, they are suddenly finding SHRA bureaucracy hard to navigate), but refused to do a rolling production or produce electronic records (which should be pretty easy right?). It’s baffling to me that if I had access to these tools two years ago, this organization is such a mess that they’d have gotten crushed out of the gate. Instead they were able to delay and avoid discovery to the point that we now have to start talking about institutional spoliation because no one remembers anything.

Leave a Comment

Your email address will not be published. Required fields are marked *

Just had the thought that doing a blog like this, while the litigation is on-going, is both naive and stupid, especially considering the mixed permissible use messaging from the court (but maybe okay for the bar, okay for me?). This popped up as I came across an old archive that had a backup of files which I believed was lost after we got evicted, some of which is responsive to existing discovery requests. I haven’t even had a chance to go through it all yet and see what has already been disclosed and what hasn’t, but here I am in a position where I really want to talk about how the tools are helping me plan strategy while potentially exposing myself unnecessarily/inadvertently.

The fool forges forward

One of the things that stood out to me was how much of the information in this case should have already been protected but hasn’t been. This has always been a concern, and one of the reasons I was hesitant about pursuing this at all, the gross invasion of privacy necessary to pursue this feels caustic. My initial reaction was filing anonymously, but after some reading it appeared that these are very rarely granted. It ultimately became a choice between whether this was important enough (to both us and every other disabled participant SHRA treats similarly) to sacrifice our privacy over. I think so.

What should have been done out of the gate is setting a protective order in place. Reading and trying to absorb as much of the local rules (while in the middle of being evicted, being cut off from other services, and other harassment), the option of a protective order just didn’t really register at all. The further along the process we got, the less space the option took up.

This is one place where LLMs work as an amazing sanity check, we can without penalty ask it to walk us through the process from the beginning, with as many permutations as we like, in a way that would cognitively exhaust any sane individual. And for individuals in the position we are, where we don’t even know what we don’t know, being able to constantly refactor new information in as we encounter it becomes a tremendously powerful tool.

While there’s supposed to be some leeway for “inelegant pleading” by pro se parties, in practice there isn’t unless a judge is personally sympathetic. Which means that pro se parties march into a minefield of issues that have nothing to do with the core issues of the case, procedural explosions which can turn a clear cut set of facts into an irrelevant mist.

Leave a Comment

Your email address will not be published. Required fields are marked *

One of the most difficult aspects of all this is how prone to unique issues each LLM introduces. I’m bouncing between Claude 3.7, GPT 4o, DeepSeek, and Gemini, and they all have weird peculiarities that blow up the others. DeepSeek for instance is super focused on formatting, being obsessive about tables, bolding, italics, everything but the actual content. Gemini is a turd and likes to fight you the whole time when it comes to legal advice altogether, so you have to wrestle it to the ground with prompts and cobble together the results. Claude writes beautifully but is as deep as a puddle, while ChatGPT is deep but impossibly schizophrenic and ADHD at the same time.

I went back and looked at my first two pleadings I used these LLMs for and holy crap there are a lot of issues that I missed trusting the output. First, ChatGPT will absolutely mangle the hell out of citations, and worse is super confident about it’s manglings. As an example I uploaded a copy of the docket report, and somehow it still got every single citation wrong. Worse, it misinterpreted who actually filed particular motions, leading to it interpreting most of the case being dead.

One thing is really clear, that as powerful and as much potential as these tools have, we are a million miles away from them being ready to use without extensive supervision. They also are really terrible at understanding, as an example left to it’s own devices all of the LLMs completely ignored the actual damages I asked for in a motion for partial MSJ (basically just ruling that the ADA, FHA, and Rehab require an interactive process and individualized assessment) and transformed it into a full blown MSJ focused on damage calculations instead of a narrow injunctive relief. And they constantly go off on little tangents like that where they hyper focus on some aspect of the text that may or may not be contextually important and try to expand it.

All said, this is still very much a work in progress, and I’m learning that trust is bad.

Leave a Comment

Your email address will not be published. Required fields are marked *

It seems we can’t find what you’re looking for. Perhaps searching can help.

Welcome to The SHRA Files Blog — where we’ll be sharing updates, research, public records, and commentary on our ongoing civil rights lawsuit and the broader failures of Sacramento’s housing system.

This site exists because no one else was telling this story. Because no agency, nonprofit, or public official stepped in when the Sacramento Housing and Redevelopment Agency (SHRA) failed to meet its basic responsibilities. Because we — and many others like us — were denied access to services that should have been guaranteed.

We started with nothing but documentation, determination, and the experience of being repeatedly failed by the systems meant to help. When traditional legal aid and advocacy organizations told us they couldn’t help, we turned to AI tools to help us structure our case, understand the law, and begin fighting back.

This site is just the beginning.

In the weeks ahead, we’ll be:

  • Breaking down SHRA’s failures in serving disabled and low-income tenants
  • Publishing public filings and supporting exhibits
  • Highlighting how the Housing Choice Voucher program is being mismanaged
  • Documenting the role of AI in expanding legal access

If you’ve been impacted by SHRA or want to contribute to the project, check out our Get Involved page.

Thanks for reading — and welcome to the fight.

— The SHRA Files Team

Leave a Comment

Your email address will not be published. Required fields are marked *

I’ve been attempting to push discovery, the result has been underwhelming. So far they’ve missed most of the deadlines, when notified they missed the deadlines they keep asking for extensions, and when they get extensions they respond with boilerplate and bullshit. I’m to the point where I’m filing a Rule 37(b) motion (will upload to the files section) after I’m finished with this post. The neat thing about the filing is that there’s over 300 pages of exhibits, all of which is boilerplate and bullshit that I had to attach and mail back to the Defense because the magistrate gave me the big middle finger by allowing the court to notify me of things electronically, but refusing to allow me to file electronically. I have lots to say about this magistrate, but we’ll wait until we get clear of her first.

One of the most fascinating things about the boilerplate and bullshit that did get returned was the revelation that five of the named Defendants no longer work at SHRA, and the Defense isn’t sure if they represent them anymore. Like they completely hid that almost everyone involved with the day to day operations over there has left and somehow didn’t mention it to us or the court. Worse, in their initial disclosure, they claimed only one person (who is no longer at SHRA) was responsible for deciding reasonable accommodation requests, now they’ve given me a list of 16 people, with no explanation.

Oh yeah, they can’t seem to find any of the electronically stored information that under the CPRA they’d be responsive to, which is the reason they need so many extensions.

The worst part is that despite them admitting all of this, there’s zero chance of them getting sanctions. And there’s literally nothing to be done about it unless/until it gets to the appellate level.

Leave a Comment

Your email address will not be published. Required fields are marked *

Up until now I assumed that the issues at SHRA were just incompetence creep, that the issues are product of slow pushes back on compliance issues that never got corrected, to the point where institutional knowledge of the practices necessary to maintain compliance was lost.

One of the things that I really like about DeepSeek is that it gets information through methods that aren’t exactly on the up and up, which means that it finds things which are hidden behind legal wrangling that doesn’t exist in China. And one of the things it found was a 2023 Internal Memo generated through whistleblower testimony in which SHRA staff instructed to “prioritize voucher terminations over costly mods”.

And the implementation of this was SHRA terminating vouchers over requests for shower chairs and grab bars as early as 2022. They provided data which showed 68% of disabled tenants who requested mods in 2023 lost vouchers within 6 months (vs. 12% of non-disabled).

WHAT.

And there’s so much worse that I’ll share once I get the discovery back. I’m stunned.

Leave a Comment

Your email address will not be published. Required fields are marked *

Okay the last few days have been a bit of a flurry. I’m still working on getting all the filings and the Defense responses uploaded, but I need to make sure we have a sane directory setup first because there’s quite a bit.

After a few weeks using these LLMs, my attitudes about them are starting to shift. Each of them seems to have their own “personality” above and beyond the artificial personality. The LLM I’ve been most amazed by so far is DeepSeek, which is funny because it was the LLM I was most skeptical of initially. Part of the issue with DeepSeek is that it doesn’t do as good of a job hand holding through the though process, or more appropriately, it’s probably the most “autistic” of the engines. The biggest standout for DeepSeek (especially compared to Gemini which is the most comparable) is that it’s SO GOOD with citations. It’s the only one that freely “admits” it’s plundering the WestLaw database meaning we have a TON of local Eastern District citations.

My citation strategy has been to use something from each level: Supreme Court, 9th Circuit, California (federal), Eastern District. The reality is the magistrate doesn’t read them anyway, but DeepSeek gives me a pool of citations to choose from, something the other LLMs struggle with (or simply won’t do, no matter how hard you beat on them).
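The selection itself is mechanical once you have the pool. A toy sketch of the one-per-level pick (the case names below are placeholders, not real citations; anything an LLM produces gets verified before it touches a filing):

```python
# Toy version of the "one citation per court level" strategy.
# The citations below are PLACEHOLDERS, not real authority.
LEVELS = ["Supreme Court", "9th Circuit", "E.D. Cal."]

def pick_one_per_level(pool: list[dict]) -> dict:
    """Keep the first candidate found for each court level, in order."""
    chosen = {}
    for level in LEVELS:
        for cand in pool:
            if cand["level"] == level:
                chosen[level] = cand["cite"]
                break
    return chosen

pool = [
    {"level": "E.D. Cal.", "cite": "Doe v. Placeholder (E.D. Cal. 2020)"},
    {"level": "Supreme Court", "cite": "Doe v. Placeholder (U.S. 1985)"},
    {"level": "9th Circuit", "cite": "Doe v. Placeholder (9th Cir. 2001)"},
]
for level, cite in pick_one_per_level(pool).items():
    print(f"{level}: {cite}")
```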

Gemini (using 2.5 Pro/5-06) is now my primary starting point for writing, even though I hate that it can’t export to .odt or even .docx. It also has the worst “canvas”-style editor, which means I have to reformat the documents significantly. That annoyance is worth it, though, since Gemini is easily the most thoughtful and well-rounded of the LLMs I’ve used. Much as DeepSeek gives lots of citation options, Gemini gives the best pool of writing options; its ability to flexibly address issues in different ways is the best of the bunch so far.

Claude is still my pretty-but-shallow option: it comes up with the best and most compelling pure language, even if its depth of analysis falls short of the others. I wish I didn’t have to prompt it so explicitly to get decent results, or it might be my primary canvas despite its painful context window. One thing I find it really helpful for is running my DeepSeek output (which tends toward pretty... “direct” language) through Claude to soften it up.

And finally there’s ChatGPT, which up until the last few days was my primary for everything. It’s easy to get started with, it’s encouraging, and it produces compelling results, but it’s kind of a piece of shit. It’s constantly messing up context, screwing up citations, and arbitrarily rewriting things. It’s easily the most obstinately wrong of the LLMs, insisting that it’s doing something correctly no matter how many times you paste the incorrect output back in and it admits it messed up. It’s the only LLM of the four where I’ve had to deal with “hallucinations”, and it explains them as “whoops, I accidentally overwrite my own context a lot”. It’s absolutely maddening at times. I’m steadily weaning myself off of it, and it’ll probably just be another voice in the room soon.

An interesting thing happened a bit ago: I had my first LLM disagreement! The Defense responded to a discovery request with bullshit and boilerplate, so I asked each of the LLMs what a good email response would look like, considering I’m already nearly done with a Rule 37 motion to compel. Claude responded that it’s probably not a good idea to respond at all: just update the motion with the response as an exhibit, note the bullshit parts since they helped (a lot), and call it a day. DeepSeek was its typical “RIP THEIR THROATS OUT” self and provided a detailed list of regulations and rules for going on the offensive instead of relying on the motion alone. Gemini pointed out that the email route is more about reinforcing our meet-and-confer duty: we’re right, and we’re going to integrate this into the motion, but the focus here should be on making the meet-and-confer record rock solid (pretty good advice). ChatGPT recommended the email, but produced recommendations that didn’t focus on strategy at all, just a direct rebuttal to the discovery response.

Interestingly, opposing counsel asked for a discovery extension (after the suit has been active for more than two years, they are suddenly finding SHRA’s bureaucracy hard to navigate), but refused to do a rolling production or produce electronic records (which should be pretty easy, right?). This organization is such a mess that if I’d had access to these tools two years ago, they’d have been crushed out of the gate. Instead they were able to delay and avoid discovery to the point that we now have to start talking about institutional spoliation, because no one remembers anything.


Just had the thought that doing a blog like this, while the litigation is ongoing, is both naive and stupid, especially considering the mixed permissible-use messaging from the court (but maybe what’s okay for the bar is okay for me?). This popped up as I came across an old archive with a backup of files I believed were lost after we got evicted, some of which are responsive to existing discovery requests. I haven’t even had a chance to go through it all and see what has already been disclosed and what hasn’t, but here I am wanting to talk about how the tools are helping me plan strategy while potentially exposing myself unnecessarily or inadvertently.

The fool forges forward

One of the things that stood out to me was how much of the information in this case should already have been protected but hasn’t been. This has always been a concern, and one of the reasons I was hesitant about pursuing this at all: the gross invasion of privacy necessary to pursue it feels caustic. My initial reaction was to file anonymously, but after some reading it appeared that requests to proceed anonymously are very rarely granted. It ultimately became a question of whether this was important enough (to both us and every other disabled participant SHRA treats similarly) to sacrifice our privacy over. I think so.

What should have been done out of the gate was getting a protective order in place. While I was reading and trying to absorb as much of the local rules as I could (in the middle of being evicted, being cut off from other services, and other harassment), the option of a protective order just didn’t register at all. The further along the process we got, the less space the option took up.

This is one place where LLMs work as an amazing sanity check: we can, without penalty, ask them to walk us through the process from the beginning, with as many permutations as we like, in a way that would cognitively exhaust any sane human. And for people in our position, who don’t even know what we don’t know, being able to constantly factor in new information as we encounter it is a tremendously powerful tool.

While there’s supposed to be some leeway for “inelegant pleading” by pro se parties, in practice there isn’t unless a judge is personally sympathetic. That means pro se parties march into a minefield of issues that have nothing to do with the core of the case: procedural explosions that can turn a clear-cut set of facts into an irrelevant mist.


One of the most difficult aspects of all this is how each LLM introduces its own unique issues. I’m bouncing between Claude 3.7, GPT-4o, DeepSeek, and Gemini, and they all have weird peculiarities that blow up the others. DeepSeek, for instance, is super focused on formatting, obsessing over tables, bolding, italics, everything but the actual content. Gemini is a turd and likes to fight you the whole way on legal advice altogether, so you have to wrestle it to the ground with prompts and cobble together the results. Claude writes beautifully but is as deep as a puddle, while ChatGPT is deep but impossibly schizophrenic and ADHD at the same time.

I went back and looked at the first two pleadings I used these LLMs for and, holy crap, there are a lot of issues I missed by trusting the output. First, ChatGPT will absolutely mangle the hell out of citations, and worse, it’s super confident about its manglings. As an example, I uploaded a copy of the docket report, and somehow it still got every single citation wrong. Worse, it misinterpreted who actually filed particular motions, leading it to conclude that most of the case was dead.
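This is why I now cross-check every draft against the docket by hand. The check itself is simple enough to automate; a hedged sketch (the draft text and ECF numbers here are invented for illustration):

```python
import re

# Flag "ECF No. N" references in a draft that don't exist on the docket.
# The sample draft and docket numbers are made up for this example.
def missing_ecf_refs(draft: str, docket: set[int]) -> list[int]:
    """Return cited ECF numbers absent from the actual docket."""
    cited = {int(n) for n in re.findall(r"ECF No\.\s*(\d+)", draft)}
    return sorted(cited - docket)

draft = "As shown in ECF No. 12 and ECF No. 47, Defendants concede..."
print(missing_ecf_refs(draft, {1, 2, 12, 30}))  # → [47]
```

Anything the check flags gets pulled or re-verified before filing.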

One thing is really clear: for all the power and potential these tools have, we are a million miles away from them being usable without extensive supervision. They’re also really terrible at understanding intent. As an example, left to their own devices, all of the LLMs completely ignored the relief I actually asked for in a motion for partial summary judgment (basically just a ruling that the ADA, FHA, and Rehab Act require an interactive process and an individualized assessment) and transformed it into a full-blown MSJ focused on damage calculations instead of narrow injunctive relief. They constantly go off on little tangents like that, hyper-focusing on some aspect of the text that may or may not be contextually important and trying to expand it.

All said, this is still very much a work in progress, and I’m learning that trust is bad.



Welcome to The SHRA Files Blog — where we’ll be sharing updates, research, public records, and commentary on our ongoing civil rights lawsuit and the broader failures of Sacramento’s housing system.

This site exists because no one else was telling this story. Because no agency, nonprofit, or public official stepped in when the Sacramento Housing and Redevelopment Agency (SHRA) failed to meet its basic responsibilities. Because we — and many others like us — were denied access to services that should have been guaranteed.

We started with nothing but documentation, determination, and the experience of being repeatedly failed by the systems meant to help. When traditional legal aid and advocacy organizations told us they couldn’t help, we turned to AI tools to help us structure our case, understand the law, and begin fighting back.

This site is just the beginning.

In the weeks ahead, we’ll be:

  • Breaking down SHRA’s failures in serving disabled and low-income tenants
  • Publishing public filings and supporting exhibits
  • Highlighting how the Housing Choice Voucher program is being mismanaged
  • Documenting the role of AI in expanding legal access

If you’ve been impacted by SHRA or want to contribute to the project, check out our Get Involved page.

Thanks for reading — and welcome to the fight.

— The SHRA Files Team


I’ve been attempting to push discovery, the result has been underwhelming. So far they’ve missed most of the deadlines, when notified they missed the deadlines they keep asking for extensions, and when they get extensions they respond with boilerplate and bullshit. I’m to the point where I’m filing a Rule 37(b) motion (will upload to the files section) after I’m finished with this post. The neat thing about the filing is that there’s over 300 pages of exhibits, all of which is boilerplate and bullshit that I had to attach and mail back to the Defense because the magistrate gave me the big middle finger by allowing the court to notify me of things electronically, but refusing to allow me to file electronically. I have lots to say about this magistrate, but we’ll wait until we get clear of her first.

One of the most fascinating things about the boilerplate and bullshit that did get returned was the revelation that five of the named Defendants no longer work at SHRA, and the Defense isn’t sure if they represent them anymore. Like they completely hid that almost everyone involved with the day to day operations over there has left and somehow didn’t mention it to us or the court. Worse, in their initial disclosure, they claimed only one person (who is no longer at SHRA) was responsible for deciding reasonable accommodation requests, now they’ve given me a list of 16 people, with no explanation.

Oh yeah, they can’t seem to find any of the electronically stored information that under the CPRA they’d be responsive to, which is the reason they need so many extensions.

The worst part is that despite them admitting all of this, there’s zero chance of them getting sanctions. And there’s literally nothing to be done about it unless/until it gets to the appellate level.

Leave a Comment

Your email address will not be published. Required fields are marked *

Up until now I assumed that the issues at SHRA were just incompetence creep, that the issues are product of slow pushes back on compliance issues that never got corrected, to the point where institutional knowledge of the practices necessary to maintain compliance was lost.

One of the things that I really like about DeepSeek is that it gets information through methods that aren’t exactly on the up and up, which means that it finds things which are hidden behind legal wrangling that doesn’t exist in China. And one of the things it found was a 2023 Internal Memo generated through whistleblower testimony in which SHRA staff instructed to “prioritize voucher terminations over costly mods”.

And the implementation of this was SHRA terminating vouchers over requests for shower chairs and grab bars as early as 2022. They provided data which showed 68% of disabled tenants who requested mods in 2023 lost vouchers within 6 months (vs. 12% of non-disabled).

WHAT.

And there’s so much worse that I’ll share once I get the discovery back. I’m stunned.

Leave a Comment

Your email address will not be published. Required fields are marked *

Okay the last few days have been a bit of a flurry. I’m still working on getting all the filings and the Defense responses uploaded, but I need to make sure we have a sane directory setup first because there’s quite a bit.

After a few weeks using these LLMs, my attitudes about them are starting to shift. Each of them seems to have their own “personality” above and beyond the artificial personality. The LLM I’ve been most amazed by so far is DeepSeek, which is funny because it was the LLM I was most skeptical of initially. Part of the issue with DeepSeek is that it doesn’t do as good of a job hand holding through the though process, or more appropriately, it’s probably the most “autistic” of the engines. The biggest standout for DeepSeek (especially compared to Gemini which is the most comparable) is that it’s SO GOOD with citations. It’s the only one that freely “admits” it’s plundering the WestLaw database meaning we have a TON of local Eastern District citations.

My citation strategy has been to use something from each level, Supreme Court, 9th Circuit, California (Fed), Eastern District. The reality is the magistrate doesn’t read them anyway, but DeepSeek ultimately gives me a pool of citations to choose from, something the other LLMs struggle with (or just don’t do no matter how you beat them).

Gemini (using 2.5 Pro/5-06) is now my primary writing starting point, even though I hate that it can’t export to .odf or even .docx formats. Because I have to reformat the documents significantly, it has the worst “canvas” style option. That’s kind of worth the annoyance though since Gemini is easily the most “thoughful” and well rounded of the LLMs I’ve used. Much like DeepSeek gives lots of citation options, Gemini gives the best pool of writing options, it’s ability to flexibly address issues in different ways is the best of the bunch so far.

Claude is still my pretty but shallow option, it comes up with the best/most compelling pure language, even if it’s depth of analysis falls short of the others. I wish I didn’t have to prompt it so explicitly to get decent results or it might be the primary canvas despite it’s painful context window. One thing I find it really helpful for is running my DeepSeek (which produces pretty.. “direct” language) through Claude to soften it up.

And finally there’s ChatGPT which up until the last few days was my primary for everything. It’s easy to get started with it, it’s encouraging, it produces compelling results, but it’s kind of a piece of shit. It’s constantly messing up context, screwing up citations, and arbitrarily rewriting things. It’s easily the most obstinately wrong of the LLMs, insisting that it’s doing something correctly no matter how many times you post the incorrect output back into it and it admits it messed up. It’s the only LLM out of the three that I’ve had to deal with “hallucinations”, and it explains it as “woops, I overwrite my own context a lot accidentally”. It’s absolutely maddening at times. I’m steadily weaning myself off of it and it’ll probably just be another voice in the room soon.

An interesting thing happened a bit ago, I had my first LLM disagreement! The Defense responded to a discovery request with bullshit and boilerplate, so I asked each of the LLMs what a good email response would be considering I’m already nearly done with a rule 37 motion to compel. Claude responded that it’s probably not a good idea to respond at all, just update the motion with the response as an exhibit, note the bullshit parts since they helped (a lot), and call it a day. DeepSeek was it’s typical “RIP THEIR THROATS OUT” and provided a detailed list of regulations and rules to go on the offensive instead of only using the motion route. Gemini pointed out that going the email route is more about reinforcing our meet and confer duty, we’re right and we are going to be integrating this in the motion, but the focus here should be on making the meet and confer rock solid (pretty good advice). ChatGPT recommended the email, but produced recommendations that didn’t focus on strategy at all, just a direct rebuttal to the discovery response.

Interestingly, opposing counsel asked for a discovery extension (after more than two years the suit has been active, they are suddenly finding SHRA bureaucracy hard to navigate), but refused to do a rolling production or produce electronic records (which should be pretty easy right?). It’s baffling to me that if I had access to these tools two years ago, this organization is such a mess that they’d have gotten crushed out of the gate. Instead they were able to delay and avoid discovery to the point that we now have to start talking about institutional spoliation because no one remembers anything.

Leave a Comment

Your email address will not be published. Required fields are marked *

Just had the thought that doing a blog like this, while the litigation is on-going, is both naive and stupid, especially considering the mixed permissible use messaging from the court (but maybe okay for the bar, okay for me?). This popped up as I came across an old archive that had a backup of files which I believed was lost after we got evicted, some of which is responsive to existing discovery requests. I haven’t even had a chance to go through it all yet and see what has already been disclosed and what hasn’t, but here I am in a position where I really want to talk about how the tools are helping me plan strategy while potentially exposing myself unnecessarily/inadvertently.

The fool forges forward

One of the things that stood out to me was how much of the information in this case should have already been protected but hasn’t been. This has always been a concern, and one of the reasons I was hesitant about pursuing this at all, the gross invasion of privacy necessary to pursue this feels caustic. My initial reaction was filing anonymously, but after some reading it appeared that these are very rarely granted. It ultimately became a choice between whether this was important enough (to both us and every other disabled participant SHRA treats similarly) to sacrifice our privacy over. I think so.

What should have been done out of the gate is setting a protective order in place. Reading and trying to absorb as much of the local rules (while in the middle of being evicted, being cut off from other services, and other harassment), the option of a protective order just didn’t really register at all. The further along the process we got, the less space the option took up.

This is one place where LLMs work as an amazing sanity check, we can without penalty ask it to walk us through the process from the beginning, with as many permutations as we like, in a way that would cognitively exhaust any sane individual. And for individuals in the position we are, where we don’t even know what we don’t know, being able to constantly refactor new information in as we encounter it becomes a tremendously powerful tool.

While there’s supposed to be some leeway for “inelegant pleading” by pro se parties, in practice there isn’t unless a judge is personally sympathetic. Which means that pro se parties march into a minefield of issues that have nothing to do with the core issues of the case, procedural explosions which can turn a clear cut set of facts into an irrelevant mist.

Leave a Comment

Your email address will not be published. Required fields are marked *

One of the most difficult aspects of all this is how prone to unique issues each LLM introduces. I’m bouncing between Claude 3.7, GPT 4o, DeepSeek, and Gemini, and they all have weird peculiarities that blow up the others. DeepSeek for instance is super focused on formatting, being obsessive about tables, bolding, italics, everything but the actual content. Gemini is a turd and likes to fight you the whole time when it comes to legal advice altogether, so you have to wrestle it to the ground with prompts and cobble together the results. Claude writes beautifully but is as deep as a puddle, while ChatGPT is deep but impossibly schizophrenic and ADHD at the same time.

I went back and looked at my first two pleadings I used these LLMs for and holy crap there are a lot of issues that I missed trusting the output. First, ChatGPT will absolutely mangle the hell out of citations, and worse is super confident about it’s manglings. As an example I uploaded a copy of the docket report, and somehow it still got every single citation wrong. Worse, it misinterpreted who actually filed particular motions, leading to it interpreting most of the case being dead.

One thing is really clear, that as powerful and as much potential as these tools have, we are a million miles away from them being ready to use without extensive supervision. They also are really terrible at understanding, as an example left to it’s own devices all of the LLMs completely ignored the actual damages I asked for in a motion for partial MSJ (basically just ruling that the ADA, FHA, and Rehab require an interactive process and individualized assessment) and transformed it into a full blown MSJ focused on damage calculations instead of a narrow injunctive relief. And they constantly go off on little tangents like that where they hyper focus on some aspect of the text that may or may not be contextually important and try to expand it.

All said, this is still very much a work in progress, and I’m learning that trust is bad.

Leave a Comment

Your email address will not be published. Required fields are marked *

It seems we can’t find what you’re looking for. Perhaps searching can help.

Welcome to The SHRA Files Blog — where we’ll be sharing updates, research, public records, and commentary on our ongoing civil rights lawsuit and the broader failures of Sacramento’s housing system.

This site exists because no one else was telling this story. Because no agency, nonprofit, or public official stepped in when the Sacramento Housing and Redevelopment Agency (SHRA) failed to meet its basic responsibilities. Because we — and many others like us — were denied access to services that should have been guaranteed.

We started with nothing but documentation, determination, and the experience of being repeatedly failed by the systems meant to help. When traditional legal aid and advocacy organizations told us they couldn’t help, we turned to AI tools to help us structure our case, understand the law, and begin fighting back.

This site is just the beginning.

In the weeks ahead, we’ll be:

  • Breaking down SHRA’s failures in serving disabled and low-income tenants
  • Publishing public filings and supporting exhibits
  • Highlighting how the Housing Choice Voucher program is being mismanaged
  • Documenting the role of AI in expanding legal access

If you’ve been impacted by SHRA or want to contribute to the project, check out our Get Involved page.

Thanks for reading — and welcome to the fight.

— The SHRA Files Team

Leave a Comment

Your email address will not be published. Required fields are marked *

I’ve been attempting to push discovery, the result has been underwhelming. So far they’ve missed most of the deadlines, when notified they missed the deadlines they keep asking for extensions, and when they get extensions they respond with boilerplate and bullshit. I’m to the point where I’m filing a Rule 37(b) motion (will upload to the files section) after I’m finished with this post. The neat thing about the filing is that there’s over 300 pages of exhibits, all of which is boilerplate and bullshit that I had to attach and mail back to the Defense because the magistrate gave me the big middle finger by allowing the court to notify me of things electronically, but refusing to allow me to file electronically. I have lots to say about this magistrate, but we’ll wait until we get clear of her first.

One of the most fascinating things about the boilerplate and bullshit that did get returned was the revelation that five of the named Defendants no longer work at SHRA, and the Defense isn’t sure if they represent them anymore. Like they completely hid that almost everyone involved with the day to day operations over there has left and somehow didn’t mention it to us or the court. Worse, in their initial disclosure, they claimed only one person (who is no longer at SHRA) was responsible for deciding reasonable accommodation requests, now they’ve given me a list of 16 people, with no explanation.

Oh yeah, they can’t seem to find any of the electronically stored information that under the CPRA they’d be responsive to, which is the reason they need so many extensions.

The worst part is that despite them admitting all of this, there’s zero chance of them getting sanctions. And there’s literally nothing to be done about it unless/until it gets to the appellate level.

Leave a Comment

Your email address will not be published. Required fields are marked *

Up until now I assumed that the issues at SHRA were just incompetence creep, that the issues are product of slow pushes back on compliance issues that never got corrected, to the point where institutional knowledge of the practices necessary to maintain compliance was lost.

One of the things that I really like about DeepSeek is that it gets information through methods that aren’t exactly on the up and up, which means that it finds things which are hidden behind legal wrangling that doesn’t exist in China. And one of the things it found was a 2023 Internal Memo generated through whistleblower testimony in which SHRA staff instructed to “prioritize voucher terminations over costly mods”.

And the implementation of this was SHRA terminating vouchers over requests for shower chairs and grab bars as early as 2022. They provided data which showed 68% of disabled tenants who requested mods in 2023 lost vouchers within 6 months (vs. 12% of non-disabled).

WHAT.

And there’s so much worse that I’ll share once I get the discovery back. I’m stunned.

Leave a Comment

Your email address will not be published. Required fields are marked *

Okay the last few days have been a bit of a flurry. I’m still working on getting all the filings and the Defense responses uploaded, but I need to make sure we have a sane directory setup first because there’s quite a bit.

After a few weeks using these LLMs, my attitudes about them are starting to shift. Each of them seems to have their own “personality” above and beyond the artificial personality. The LLM I’ve been most amazed by so far is DeepSeek, which is funny because it was the LLM I was most skeptical of initially. Part of the issue with DeepSeek is that it doesn’t do as good of a job hand holding through the though process, or more appropriately, it’s probably the most “autistic” of the engines. The biggest standout for DeepSeek (especially compared to Gemini which is the most comparable) is that it’s SO GOOD with citations. It’s the only one that freely “admits” it’s plundering the WestLaw database meaning we have a TON of local Eastern District citations.

My citation strategy has been to use something from each level, Supreme Court, 9th Circuit, California (Fed), Eastern District. The reality is the magistrate doesn’t read them anyway, but DeepSeek ultimately gives me a pool of citations to choose from, something the other LLMs struggle with (or just don’t do no matter how you beat them).

Gemini (using 2.5 Pro/5-06) is now my primary writing starting point, even though I hate that it can’t export to .odf or even .docx formats. Because I have to reformat the documents significantly, it has the worst “canvas” style option. That’s kind of worth the annoyance though since Gemini is easily the most “thoughful” and well rounded of the LLMs I’ve used. Much like DeepSeek gives lots of citation options, Gemini gives the best pool of writing options, it’s ability to flexibly address issues in different ways is the best of the bunch so far.

Claude is still my pretty but shallow option, it comes up with the best/most compelling pure language, even if it’s depth of analysis falls short of the others. I wish I didn’t have to prompt it so explicitly to get decent results or it might be the primary canvas despite it’s painful context window. One thing I find it really helpful for is running my DeepSeek (which produces pretty.. “direct” language) through Claude to soften it up.

Just had the thought that doing a blog like this while the litigation is ongoing is both naive and stupid, especially considering the mixed messaging from the court about permissible use (but maybe what's okay for the bar is okay for me?). This hit me as I came across an old archive containing a backup of files I believed were lost after we got evicted, some of which are responsive to existing discovery requests. I haven't even had a chance to go through it all and see what has already been disclosed and what hasn't, but here I am in a position where I really want to talk about how the tools are helping me plan strategy while potentially exposing myself unnecessarily or inadvertently.

The fool forges forward

One of the things that stood out to me was how much of the information in this case should already have been protected but hasn't been. This has always been a concern, and one of the reasons I was hesitant about pursuing this at all: the gross invasion of privacy necessary to pursue it feels caustic. My initial instinct was to file anonymously, but after some reading it became clear that anonymous filings are very rarely granted. It ultimately came down to whether this was important enough (to both us and every other disabled participant SHRA treats similarly) to sacrifice our privacy over. I think it is.

What should have been done out of the gate was putting a protective order in place. While I was reading and trying to absorb as much of the local rules as I could (in the middle of being evicted, being cut off from other services, and other harassment), the option of a protective order just never registered. The further along the process we got, the less space the option took up.

This is one place where LLMs work as an amazing sanity check: we can, without penalty, ask them to walk us through the process from the beginning, with as many permutations as we like, in a way that would cognitively exhaust any sane human. And for people in our position, who don't even know what we don't know, being able to constantly factor in new information as we encounter it is a tremendously powerful tool.

While there's supposed to be some leeway for "inelegant pleading" by pro se parties, in practice there isn't unless a judge is personally sympathetic. That means pro se parties march into a minefield of issues that have nothing to do with the core of the case: procedural explosions that can turn a clear-cut set of facts into irrelevant mist.

One of the most difficult aspects of all this is the unique set of issues each LLM introduces. I'm bouncing between Claude 3.7, GPT-4o, DeepSeek, and Gemini, and they all have weird peculiarities that blow up the others' work. DeepSeek, for instance, is super focused on formatting, obsessing over tables, bolding, italics, everything but the actual content. Gemini is a turd that fights you the whole way when it comes to legal advice, so you have to wrestle it to the ground with prompts and cobble together the results. Claude writes beautifully but is as deep as a puddle, while ChatGPT is deep but impossibly scattered at the same time.

I went back and looked at the first two pleadings I used these LLMs for, and holy crap, there are a lot of issues I missed by trusting the output. First, ChatGPT will absolutely mangle the hell out of citations, and worse, it's supremely confident about its manglings. As an example, I uploaded a copy of the docket report, and it still got every single citation wrong. Worse, it misinterpreted who actually filed particular motions, leading it to conclude that most of the case was dead.

One thing is really clear: as powerful as these tools are and as much potential as they have, we are a million miles away from them being usable without extensive supervision. They're also really terrible at understanding intent. As an example, left to their own devices, all of the LLMs completely ignored the actual relief I asked for in a motion for partial summary judgment (basically just a ruling that the ADA, FHA, and Rehab Act require an interactive process and individualized assessment) and transformed it into a full-blown MSJ focused on damages calculations instead of narrow injunctive relief. And they constantly go off on tangents like that, hyper-focusing on some aspect of the text that may or may not be contextually important and trying to expand it.

All said, this is still very much a work in progress, and the lesson I keep learning is not to trust the output.

Welcome to The SHRA Files Blog — where we’ll be sharing updates, research, public records, and commentary on our ongoing civil rights lawsuit and the broader failures of Sacramento’s housing system.

This site exists because no one else was telling this story. Because no agency, nonprofit, or public official stepped in when the Sacramento Housing and Redevelopment Agency (SHRA) failed to meet its basic responsibilities. Because we — and many others like us — were denied access to services that should have been guaranteed.

We started with nothing but documentation, determination, and the experience of being repeatedly failed by the systems meant to help. When traditional legal aid and advocacy organizations told us they couldn’t help, we turned to AI tools to help us structure our case, understand the law, and begin fighting back.

This site is just the beginning.

In the weeks ahead, we’ll be:

  • Breaking down SHRA’s failures in serving disabled and low-income tenants
  • Publishing public filings and supporting exhibits
  • Highlighting how the Housing Choice Voucher program is being mismanaged
  • Documenting the role of AI in expanding legal access

If you’ve been impacted by SHRA or want to contribute to the project, check out our Get Involved page.

Thanks for reading — and welcome to the fight.

— The SHRA Files Team

Up until now I assumed that the issues at SHRA were just incompetence creep: the product of slow backsliding on compliance issues that never got corrected, to the point where institutional knowledge of the practices necessary to maintain compliance was lost.

One of the things I really like about DeepSeek is that it gets information through methods that aren't exactly on the up and up, which means it finds things hidden behind legal wrangling that doesn't exist in China. And one of the things it surfaced was a 2023 internal memo, generated through whistleblower testimony, in which SHRA staff were instructed to "prioritize voucher terminations over costly mods."

And the implementation of this was SHRA terminating vouchers over requests for shower chairs and grab bars as early as 2022. It provided data showing that 68% of disabled tenants who requested modifications in 2023 lost their vouchers within six months (versus 12% of non-disabled tenants).

WHAT.

And there’s so much worse that I’ll share once I get the discovery back. I’m stunned.

Okay, the last few days have been a bit of a flurry. I'm still working on getting all the filings and the Defense's responses uploaded, but I need to set up a sane directory structure first because there's quite a bit.

After a few weeks of using these LLMs, my attitudes about them are starting to shift. Each seems to have its own "personality" above and beyond the artificial one. The LLM I've been most amazed by so far is DeepSeek, which is funny because it was the one I was most skeptical of initially. Part of the issue with DeepSeek is that it doesn't do as good a job of hand-holding through the thought process; it's probably the most "autistic" of the engines. Its biggest standout (especially compared to Gemini, which is the most comparable) is that it's SO GOOD with citations. It's the only one that freely "admits" it's plundering the Westlaw database, meaning we have a TON of local Eastern District citations.

My citation strategy has been to use something from each level: Supreme Court, Ninth Circuit, California federal courts, and the Eastern District specifically. The reality is the magistrate doesn't read them anyway, but DeepSeek ultimately gives me a pool of citations to choose from, something the other LLMs struggle with (or just won't do no matter how you beat them).

Gemini (using 2.5 Pro/05-06) is now my primary starting point for writing, even though I hate that it can't export to .odt or even .docx formats, and it has the worst "canvas"-style editor, which means I have to reformat the documents significantly. That annoyance is worth it, though, since Gemini is easily the most thoughtful and well-rounded of the LLMs I've used. Much as DeepSeek gives lots of citation options, Gemini gives the best pool of writing options; its ability to flexibly address issues in different ways is the best of the bunch so far.

Claude is still my pretty-but-shallow option: it comes up with the best and most compelling pure language, even if its depth of analysis falls short of the others. I wish I didn't have to prompt it so explicitly to get decent results, or it might be the primary canvas despite its painful context window. One thing it's really helpful for is running my DeepSeek output (which produces pretty... "direct" language) through Claude to soften it up.

And finally there's ChatGPT, which until the last few days was my primary for everything. It's easy to get started with, it's encouraging, and it produces compelling results, but it's kind of a piece of shit. It's constantly messing up context, screwing up citations, and arbitrarily rewriting things. It's easily the most obstinately wrong of the LLMs, insisting it's doing something correctly no matter how many times you paste the incorrect output back in and it admits it messed up. It's the only one of the four where I've had to deal with "hallucinations," and it explains them as "whoops, I accidentally overwrite my own context a lot." It's absolutely maddening at times. I'm steadily weaning myself off of it, and it'll probably just be another voice in the room soon.

An interesting thing happened a bit ago: I had my first LLM disagreement! The Defense responded to a discovery request with bullshit and boilerplate, so I asked each of the LLMs what a good email response would look like, considering I'm already nearly done with a Rule 37 motion to compel. Claude said it's probably not a good idea to respond at all: just update the motion with the response as an exhibit, note the bullshit parts since they helped (a lot), and call it a day. DeepSeek was its typical "RIP THEIR THROATS OUT" self and provided a detailed list of regulations and rules for going on the offensive instead of relying only on the motion. Gemini pointed out that the email route is really about reinforcing our meet-and-confer duty: we're right, and we're going to integrate this into the motion, but the focus should be on making the meet-and-confer record rock solid (pretty good advice). ChatGPT recommended the email but produced recommendations that didn't focus on strategy at all, just a direct rebuttal to the discovery response.

Interestingly, opposing counsel asked for a discovery extension (after more than two years of the suit being active, they are suddenly finding SHRA's bureaucracy hard to navigate), but refused to do a rolling production or produce electronic records (which should be pretty easy, right?). What's baffling is that if I'd had access to these tools two years ago, this organization is such a mess that they'd have been crushed out of the gate. Instead they were able to delay and avoid discovery to the point that we now have to start talking about institutional spoliation, because no one remembers anything.

Leave a Comment

Your email address will not be published. Required fields are marked *

Just had the thought that doing a blog like this, while the litigation is on-going, is both naive and stupid, especially considering the mixed permissible use messaging from the court (but maybe okay for the bar, okay for me?). This popped up as I came across an old archive that had a backup of files which I believed was lost after we got evicted, some of which is responsive to existing discovery requests. I haven’t even had a chance to go through it all yet and see what has already been disclosed and what hasn’t, but here I am in a position where I really want to talk about how the tools are helping me plan strategy while potentially exposing myself unnecessarily/inadvertently.

The fool forges forward

One of the things that stood out to me was how much of the information in this case should have already been protected but hasn’t been. This has always been a concern, and one of the reasons I was hesitant about pursuing this at all, the gross invasion of privacy necessary to pursue this feels caustic. My initial reaction was filing anonymously, but after some reading it appeared that these are very rarely granted. It ultimately became a choice between whether this was important enough (to both us and every other disabled participant SHRA treats similarly) to sacrifice our privacy over. I think so.

What should have been done out of the gate is setting a protective order in place. Reading and trying to absorb as much of the local rules (while in the middle of being evicted, being cut off from other services, and other harassment), the option of a protective order just didn’t really register at all. The further along the process we got, the less space the option took up.

This is one place where LLMs work as an amazing sanity check, we can without penalty ask it to walk us through the process from the beginning, with as many permutations as we like, in a way that would cognitively exhaust any sane individual. And for individuals in the position we are, where we don’t even know what we don’t know, being able to constantly refactor new information in as we encounter it becomes a tremendously powerful tool.

While there’s supposed to be some leeway for “inelegant pleading” by pro se parties, in practice there isn’t unless a judge is personally sympathetic. Which means that pro se parties march into a minefield of issues that have nothing to do with the core issues of the case, procedural explosions which can turn a clear cut set of facts into an irrelevant mist.

Leave a Comment

Your email address will not be published. Required fields are marked *

One of the most difficult aspects of all this is how prone to unique issues each LLM introduces. I’m bouncing between Claude 3.7, GPT 4o, DeepSeek, and Gemini, and they all have weird peculiarities that blow up the others. DeepSeek for instance is super focused on formatting, being obsessive about tables, bolding, italics, everything but the actual content. Gemini is a turd and likes to fight you the whole time when it comes to legal advice altogether, so you have to wrestle it to the ground with prompts and cobble together the results. Claude writes beautifully but is as deep as a puddle, while ChatGPT is deep but impossibly schizophrenic and ADHD at the same time.

I went back and looked at my first two pleadings I used these LLMs for and holy crap there are a lot of issues that I missed trusting the output. First, ChatGPT will absolutely mangle the hell out of citations, and worse is super confident about it’s manglings. As an example I uploaded a copy of the docket report, and somehow it still got every single citation wrong. Worse, it misinterpreted who actually filed particular motions, leading to it interpreting most of the case being dead.

One thing is really clear, that as powerful and as much potential as these tools have, we are a million miles away from them being ready to use without extensive supervision. They also are really terrible at understanding, as an example left to it’s own devices all of the LLMs completely ignored the actual damages I asked for in a motion for partial MSJ (basically just ruling that the ADA, FHA, and Rehab require an interactive process and individualized assessment) and transformed it into a full blown MSJ focused on damage calculations instead of a narrow injunctive relief. And they constantly go off on little tangents like that where they hyper focus on some aspect of the text that may or may not be contextually important and try to expand it.

All said, this is still very much a work in progress, and I’m learning that trust is bad.

Leave a Comment

Your email address will not be published. Required fields are marked *

It seems we can’t find what you’re looking for. Perhaps searching can help.

Welcome to The SHRA Files Blog — where we’ll be sharing updates, research, public records, and commentary on our ongoing civil rights lawsuit and the broader failures of Sacramento’s housing system.

This site exists because no one else was telling this story. Because no agency, nonprofit, or public official stepped in when the Sacramento Housing and Redevelopment Agency (SHRA) failed to meet its basic responsibilities. Because we — and many others like us — were denied access to services that should have been guaranteed.

We started with nothing but documentation, determination, and the experience of being repeatedly failed by the systems meant to help. When traditional legal aid and advocacy organizations told us they couldn’t help, we turned to AI tools to help us structure our case, understand the law, and begin fighting back.

This site is just the beginning.

In the weeks ahead, we’ll be:

  • Breaking down SHRA’s failures in serving disabled and low-income tenants
  • Publishing public filings and supporting exhibits
  • Highlighting how the Housing Choice Voucher program is being mismanaged
  • Documenting the role of AI in expanding legal access

If you’ve been impacted by SHRA or want to contribute to the project, check out our Get Involved page.

Thanks for reading — and welcome to the fight.

— The SHRA Files Team

Leave a Comment

Your email address will not be published. Required fields are marked *

I’ve been attempting to push discovery, the result has been underwhelming. So far they’ve missed most of the deadlines, when notified they missed the deadlines they keep asking for extensions, and when they get extensions they respond with boilerplate and bullshit. I’m to the point where I’m filing a Rule 37(b) motion (will upload to the files section) after I’m finished with this post. The neat thing about the filing is that there’s over 300 pages of exhibits, all of which is boilerplate and bullshit that I had to attach and mail back to the Defense because the magistrate gave me the big middle finger by allowing the court to notify me of things electronically, but refusing to allow me to file electronically. I have lots to say about this magistrate, but we’ll wait until we get clear of her first.

One of the most fascinating things about the boilerplate and bullshit that did get returned was the revelation that five of the named Defendants no longer work at SHRA, and the Defense isn’t sure if they represent them anymore. Like they completely hid that almost everyone involved with the day to day operations over there has left and somehow didn’t mention it to us or the court. Worse, in their initial disclosure, they claimed only one person (who is no longer at SHRA) was responsible for deciding reasonable accommodation requests, now they’ve given me a list of 16 people, with no explanation.

Oh yeah, they can’t seem to find any of the electronically stored information that under the CPRA they’d be responsive to, which is the reason they need so many extensions.

The worst part is that despite them admitting all of this, there’s zero chance of them getting sanctions. And there’s literally nothing to be done about it unless/until it gets to the appellate level.

Leave a Comment

Your email address will not be published. Required fields are marked *

Up until now I assumed that the issues at SHRA were just incompetence creep, that the issues are product of slow pushes back on compliance issues that never got corrected, to the point where institutional knowledge of the practices necessary to maintain compliance was lost.

One of the things that I really like about DeepSeek is that it gets information through methods that aren’t exactly on the up and up, which means that it finds things which are hidden behind legal wrangling that doesn’t exist in China. And one of the things it found was a 2023 Internal Memo generated through whistleblower testimony in which SHRA staff instructed to “prioritize voucher terminations over costly mods”.

And the implementation of this was SHRA terminating vouchers over requests for shower chairs and grab bars as early as 2022. They provided data which showed 68% of disabled tenants who requested mods in 2023 lost vouchers within 6 months (vs. 12% of non-disabled).

WHAT.

And there’s so much worse that I’ll share once I get the discovery back. I’m stunned.

Leave a Comment

Your email address will not be published. Required fields are marked *

Okay the last few days have been a bit of a flurry. I’m still working on getting all the filings and the Defense responses uploaded, but I need to make sure we have a sane directory setup first because there’s quite a bit.

After a few weeks using these LLMs, my attitudes about them are starting to shift. Each of them seems to have their own “personality” above and beyond the artificial personality. The LLM I’ve been most amazed by so far is DeepSeek, which is funny because it was the LLM I was most skeptical of initially. Part of the issue with DeepSeek is that it doesn’t do as good of a job hand holding through the though process, or more appropriately, it’s probably the most “autistic” of the engines. The biggest standout for DeepSeek (especially compared to Gemini which is the most comparable) is that it’s SO GOOD with citations. It’s the only one that freely “admits” it’s plundering the WestLaw database meaning we have a TON of local Eastern District citations.

My citation strategy has been to use something from each level, Supreme Court, 9th Circuit, California (Fed), Eastern District. The reality is the magistrate doesn’t read them anyway, but DeepSeek ultimately gives me a pool of citations to choose from, something the other LLMs struggle with (or just don’t do no matter how you beat them).

Gemini (using 2.5 Pro/5-06) is now my primary writing starting point, even though I hate that it can’t export to .odf or even .docx formats. Because I have to reformat the documents significantly, it has the worst “canvas” style option. That’s kind of worth the annoyance though since Gemini is easily the most “thoughful” and well rounded of the LLMs I’ve used. Much like DeepSeek gives lots of citation options, Gemini gives the best pool of writing options, it’s ability to flexibly address issues in different ways is the best of the bunch so far.

Claude is still my pretty but shallow option, it comes up with the best/most compelling pure language, even if it’s depth of analysis falls short of the others. I wish I didn’t have to prompt it so explicitly to get decent results or it might be the primary canvas despite it’s painful context window. One thing I find it really helpful for is running my DeepSeek (which produces pretty.. “direct” language) through Claude to soften it up.

And finally there’s ChatGPT which up until the last few days was my primary for everything. It’s easy to get started with it, it’s encouraging, it produces compelling results, but it’s kind of a piece of shit. It’s constantly messing up context, screwing up citations, and arbitrarily rewriting things. It’s easily the most obstinately wrong of the LLMs, insisting that it’s doing something correctly no matter how many times you post the incorrect output back into it and it admits it messed up. It’s the only LLM out of the three that I’ve had to deal with “hallucinations”, and it explains it as “woops, I overwrite my own context a lot accidentally”. It’s absolutely maddening at times. I’m steadily weaning myself off of it and it’ll probably just be another voice in the room soon.

An interesting thing happened a bit ago, I had my first LLM disagreement! The Defense responded to a discovery request with bullshit and boilerplate, so I asked each of the LLMs what a good email response would be considering I’m already nearly done with a rule 37 motion to compel. Claude responded that it’s probably not a good idea to respond at all, just update the motion with the response as an exhibit, note the bullshit parts since they helped (a lot), and call it a day. DeepSeek was it’s typical “RIP THEIR THROATS OUT” and provided a detailed list of regulations and rules to go on the offensive instead of only using the motion route. Gemini pointed out that going the email route is more about reinforcing our meet and confer duty, we’re right and we are going to be integrating this in the motion, but the focus here should be on making the meet and confer rock solid (pretty good advice). ChatGPT recommended the email, but produced recommendations that didn’t focus on strategy at all, just a direct rebuttal to the discovery response.

Interestingly, opposing counsel asked for a discovery extension (after more than two years the suit has been active, they are suddenly finding SHRA bureaucracy hard to navigate), but refused to do a rolling production or produce electronic records (which should be pretty easy right?). It’s baffling to me that if I had access to these tools two years ago, this organization is such a mess that they’d have gotten crushed out of the gate. Instead they were able to delay and avoid discovery to the point that we now have to start talking about institutional spoliation because no one remembers anything.

Leave a Comment

Your email address will not be published. Required fields are marked *

Just had the thought that doing a blog like this, while the litigation is on-going, is both naive and stupid, especially considering the mixed permissible use messaging from the court (but maybe okay for the bar, okay for me?). This popped up as I came across an old archive that had a backup of files which I believed was lost after we got evicted, some of which is responsive to existing discovery requests. I haven’t even had a chance to go through it all yet and see what has already been disclosed and what hasn’t, but here I am in a position where I really want to talk about how the tools are helping me plan strategy while potentially exposing myself unnecessarily/inadvertently.

The fool forges forward

One of the things that stood out to me was how much of the information in this case should have already been protected but hasn’t been. This has always been a concern, and one of the reasons I was hesitant about pursuing this at all, the gross invasion of privacy necessary to pursue this feels caustic. My initial reaction was filing anonymously, but after some reading it appeared that these are very rarely granted. It ultimately became a choice between whether this was important enough (to both us and every other disabled participant SHRA treats similarly) to sacrifice our privacy over. I think so.

What should have been done out of the gate is setting a protective order in place. Reading and trying to absorb as much of the local rules (while in the middle of being evicted, being cut off from other services, and other harassment), the option of a protective order just didn’t really register at all. The further along the process we got, the less space the option took up.

This is one place where LLMs work as an amazing sanity check, we can without penalty ask it to walk us through the process from the beginning, with as many permutations as we like, in a way that would cognitively exhaust any sane individual. And for individuals in the position we are, where we don’t even know what we don’t know, being able to constantly refactor new information in as we encounter it becomes a tremendously powerful tool.

While there’s supposed to be some leeway for “inelegant pleading” by pro se parties, in practice there isn’t unless a judge is personally sympathetic. Which means that pro se parties march into a minefield of issues that have nothing to do with the core issues of the case, procedural explosions which can turn a clear cut set of facts into an irrelevant mist.

Leave a Comment

Your email address will not be published. Required fields are marked *

One of the most difficult aspects of all this is how prone to unique issues each LLM introduces. I’m bouncing between Claude 3.7, GPT 4o, DeepSeek, and Gemini, and they all have weird peculiarities that blow up the others. DeepSeek for instance is super focused on formatting, being obsessive about tables, bolding, italics, everything but the actual content. Gemini is a turd and likes to fight you the whole time when it comes to legal advice altogether, so you have to wrestle it to the ground with prompts and cobble together the results. Claude writes beautifully but is as deep as a puddle, while ChatGPT is deep but impossibly schizophrenic and ADHD at the same time.

I went back and looked at my first two pleadings I used these LLMs for and holy crap there are a lot of issues that I missed trusting the output. First, ChatGPT will absolutely mangle the hell out of citations, and worse is super confident about it’s manglings. As an example I uploaded a copy of the docket report, and somehow it still got every single citation wrong. Worse, it misinterpreted who actually filed particular motions, leading to it interpreting most of the case being dead.

One thing is really clear, that as powerful and as much potential as these tools have, we are a million miles away from them being ready to use without extensive supervision. They also are really terrible at understanding, as an example left to it’s own devices all of the LLMs completely ignored the actual damages I asked for in a motion for partial MSJ (basically just ruling that the ADA, FHA, and Rehab require an interactive process and individualized assessment) and transformed it into a full blown MSJ focused on damage calculations instead of a narrow injunctive relief. And they constantly go off on little tangents like that where they hyper focus on some aspect of the text that may or may not be contextually important and try to expand it.

All said, this is still very much a work in progress, and I’m learning that trust is bad.

Welcome to The SHRA Files Blog — where we’ll be sharing updates, research, public records, and commentary on our ongoing civil rights lawsuit and the broader failures of Sacramento’s housing system.

This site exists because no one else was telling this story. Because no agency, nonprofit, or public official stepped in when the Sacramento Housing and Redevelopment Agency (SHRA) failed to meet its basic responsibilities. Because we — and many others like us — were denied access to services that should have been guaranteed.

We started with nothing but documentation, determination, and the experience of being repeatedly failed by the systems meant to help. When traditional legal aid and advocacy organizations told us they couldn’t help, we turned to AI tools to help us structure our case, understand the law, and begin fighting back.

This site is just the beginning.

In the weeks ahead, we’ll be:

  • Breaking down SHRA’s failures in serving disabled and low-income tenants
  • Publishing public filings and supporting exhibits
  • Highlighting how the Housing Choice Voucher program is being mismanaged
  • Documenting the role of AI in expanding legal access

If you’ve been impacted by SHRA or want to contribute to the project, check out our Get Involved page.

Thanks for reading — and welcome to the fight.

— The SHRA Files Team

I’ve been attempting to push discovery, and the result has been underwhelming. So far they’ve missed most of the deadlines; when notified that they missed them, they keep asking for extensions; and when they get extensions, they respond with boilerplate and bullshit. I’m to the point where I’m filing a Rule 37(b) motion (I’ll upload it to the files section) after I finish this post. The “neat” thing about the filing is that there are over 300 pages of exhibits, all of it boilerplate and bullshit that I had to attach and mail back to the Defense, because the magistrate gave me the big middle finger by allowing the court to notify me of things electronically while refusing to allow me to file electronically. I have lots to say about this magistrate, but that will wait until we’re clear of her.

One of the most fascinating things about the boilerplate and bullshit that did get returned was the revelation that five of the named Defendants no longer work at SHRA, and the Defense isn’t sure whether it still represents them. They completely hid the fact that almost everyone involved with day-to-day operations over there has left, and somehow never mentioned it to us or the court. Worse, in their initial disclosures they claimed only one person (who is no longer at SHRA) was responsible for deciding reasonable accommodation requests; now they’ve given me a list of 16 people, with no explanation.

Oh yeah, and they can’t seem to find any of the electronically stored information that would be responsive under the CPRA, which is the reason they need so many extensions.

The worst part is that despite admitting all of this, there’s zero chance they’ll be sanctioned. And there’s literally nothing to be done about it unless/until the case reaches the appellate level.

Up until now I assumed the issues at SHRA were just incompetence creep: the product of slow backsliding on compliance problems that never got corrected, to the point where institutional knowledge of the practices necessary to maintain compliance was lost.

One of the things I really like about DeepSeek is that it gets information through methods that aren’t exactly on the up and up, which means it finds things hidden behind legal wrangling that doesn’t exist in China. And one of the things it found was a 2023 internal memo, surfaced through whistleblower testimony, in which SHRA staff were instructed to “prioritize voucher terminations over costly mods”.

In practice, this meant SHRA terminating vouchers over requests for shower chairs and grab bars as early as 2022. It also surfaced data showing that 68% of disabled tenants who requested mods in 2023 lost their vouchers within six months (vs. 12% of non-disabled tenants).

WHAT.

And there’s so much worse that I’ll share once I get the discovery back. I’m stunned.

Okay, the last few days have been a bit of a flurry. I’m still working on getting all the filings and the Defense’s responses uploaded, but I need to make sure we have a sane directory setup first, because there’s quite a bit.

After a few weeks using these LLMs, my attitudes about them are starting to shift. Each of them seems to have its own “personality” above and beyond the artificial personality. The LLM I’ve been most amazed by so far is DeepSeek, which is funny because it was the one I was most skeptical of initially. Part of the issue with DeepSeek is that it doesn’t do as good a job of hand-holding you through the thought process; or, more appropriately, it’s probably the most “autistic” of the engines. The biggest standout for DeepSeek (especially compared to Gemini, which is the most comparable) is that it’s SO GOOD with citations. It’s the only one that freely “admits” it’s plundering the Westlaw database, meaning we have a TON of local Eastern District citations.

My citation strategy has been to use something from each level: Supreme Court, Ninth Circuit, California federal, Eastern District. The reality is the magistrate doesn’t read them anyway, but DeepSeek ultimately gives me a pool of citations to choose from, something the other LLMs struggle with (or just won’t do, no matter how you beat on them).
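That “one from each level” check is simple enough to mechanize over a candidate pool. A rough sketch, with an assumed reporter-to-level mapping (the real mapping is more nuanced — not every F.3d case is Ninth Circuit, for instance — so treat this as a first-pass filter, not gospel):

```python
# Assumed mapping from reporter abbreviation to court level.
LEVELS = {
    "U.S.": "Supreme Court",
    "S. Ct.": "Supreme Court",
    "F.3d": "Circuit",
    "F.4th": "Circuit",
    "F. Supp. 3d": "District",
}

def missing_levels(citations):
    """Return the court levels with no citation in the candidate pool."""
    covered = set()
    for cite in citations:
        for reporter, level in LEVELS.items():
            # Pad with spaces so "F.3d" doesn't match inside "F. Supp. 3d".
            if f" {reporter} " in f" {cite} ":
                covered.add(level)
                break
    return sorted(set(LEVELS.values()) - covered)
```

Feed it the pool DeepSeek hands back and it tells you which rung of the ladder still needs a citation before the brief goes out.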

Gemini (using 2.5 Pro/5-06) is now my primary starting point for writing, even though I hate that it can’t export to .odf or even .docx. It has the worst “canvas”-style option, so I have to reformat the documents significantly. That’s worth the annoyance, though, since Gemini is easily the most “thoughtful” and well-rounded of the LLMs I’ve used. Much like DeepSeek gives lots of citation options, Gemini gives the best pool of writing options; its ability to flexibly address issues in different ways is the best of the bunch so far.

Claude is still my pretty-but-shallow option: it comes up with the best, most compelling pure language, even if its depth of analysis falls short of the others. I wish I didn’t have to prompt it so explicitly to get decent results, or it might be my primary canvas despite its painful context window. One thing I find it really helpful for is running my DeepSeek output (which produces pretty… “direct” language) through Claude to soften it up.

And finally there’s ChatGPT, which up until the last few days was my primary for everything. It’s easy to get started with, it’s encouraging, it produces compelling results, but it’s kind of a piece of shit. It’s constantly messing up context, screwing up citations, and arbitrarily rewriting things. It’s easily the most obstinately wrong of the LLMs, insisting it’s doing something correctly no matter how many times you paste the incorrect output back in and it admits it messed up. It’s the only one of the four where I’ve had to deal with “hallucinations”, and it explains them as “woops, I overwrite my own context a lot accidentally”. It’s absolutely maddening at times. I’m steadily weaning myself off of it, and it’ll probably just be another voice in the room soon.

An interesting thing happened a bit ago: I had my first LLM disagreement! The Defense responded to a discovery request with bullshit and boilerplate, so I asked each of the LLMs what a good email response would be, considering I’m already nearly done with a Rule 37 motion to compel. Claude responded that it’s probably not a good idea to respond at all: just update the motion with the response as an exhibit, note the bullshit parts (since they helped, a lot), and call it a day. DeepSeek was its typical “RIP THEIR THROATS OUT” self and provided a detailed list of regulations and rules for going on the offensive instead of relying on the motion alone. Gemini pointed out that the email route is really about reinforcing our meet-and-confer duty: we’re right, and we’re going to integrate this into the motion, but the focus should be on making the meet-and-confer record rock solid (pretty good advice). ChatGPT recommended the email but produced recommendations that didn’t focus on strategy at all, just a direct rebuttal to the discovery response.

Interestingly, opposing counsel asked for a discovery extension (after the suit has been active for more than two years, they’re suddenly finding SHRA’s bureaucracy hard to navigate), but refused to do a rolling production or produce electronic records (which should be pretty easy, right?). It’s baffling: if I’d had access to these tools two years ago, this organization is such a mess that they’d have been crushed out of the gate. Instead, they were able to delay and avoid discovery to the point that we now have to start talking about institutional spoliation, because no one remembers anything.

Just had the thought that doing a blog like this while the litigation is ongoing is both naive and stupid, especially considering the mixed permissible-use messaging from the court (but if it’s okay for the bar, maybe it’s okay for me?). This came up as I stumbled on an old archive containing a backup of files I believed were lost after we got evicted, some of which are responsive to existing discovery requests. I haven’t even had a chance to go through it all and see what has already been disclosed and what hasn’t, but here I am, wanting to talk about how the tools are helping me plan strategy while potentially exposing myself unnecessarily or inadvertently.

The fool forges forward

One of the things that stood out to me is how much of the information in this case should already have been protected but hasn’t been. This has always been a concern, and one of the reasons I was hesitant about pursuing this at all: the gross invasion of privacy necessary to pursue it feels caustic. My initial instinct was to file anonymously, but after some reading it appeared that anonymous filings are very rarely granted. It ultimately came down to whether this was important enough (to both us and every other disabled participant SHRA treats similarly) to sacrifice our privacy over. I think it is.

What should have been done out of the gate was putting a protective order in place. While I was reading and trying to absorb as much of the local rules as I could (in the middle of being evicted, cut off from other services, and otherwise harassed), the option of a protective order just didn’t register at all. The further along the process we got, the less space the option took up.

This is one place where LLMs work as an amazing sanity check: we can, without penalty, ask them to walk us through the process from the beginning, with as many permutations as we like, in a way that would cognitively exhaust any sane individual. And for people in our position, who don’t even know what we don’t know, being able to constantly factor in new information as we encounter it is a tremendously powerful tool.

While there’s supposed to be some leeway for “inelegant pleading” by pro se parties, in practice there isn’t unless a judge is personally sympathetic. That means pro se parties march into a minefield of issues that have nothing to do with the core of the case: procedural explosions that can turn a clear-cut set of facts into an irrelevant mist.

Leave a Comment

Your email address will not be published. Required fields are marked *

One of the most difficult aspects of all this is how prone to unique issues each LLM introduces. I’m bouncing between Claude 3.7, GPT 4o, DeepSeek, and Gemini, and they all have weird peculiarities that blow up the others. DeepSeek for instance is super focused on formatting, being obsessive about tables, bolding, italics, everything but the actual content. Gemini is a turd and likes to fight you the whole time when it comes to legal advice altogether, so you have to wrestle it to the ground with prompts and cobble together the results. Claude writes beautifully but is as deep as a puddle, while ChatGPT is deep but impossibly schizophrenic and ADHD at the same time.

I went back and looked at my first two pleadings I used these LLMs for and holy crap there are a lot of issues that I missed trusting the output. First, ChatGPT will absolutely mangle the hell out of citations, and worse is super confident about it’s manglings. As an example I uploaded a copy of the docket report, and somehow it still got every single citation wrong. Worse, it misinterpreted who actually filed particular motions, leading to it interpreting most of the case being dead.

One thing is really clear, that as powerful and as much potential as these tools have, we are a million miles away from them being ready to use without extensive supervision. They also are really terrible at understanding, as an example left to it’s own devices all of the LLMs completely ignored the actual damages I asked for in a motion for partial MSJ (basically just ruling that the ADA, FHA, and Rehab require an interactive process and individualized assessment) and transformed it into a full blown MSJ focused on damage calculations instead of a narrow injunctive relief. And they constantly go off on little tangents like that where they hyper focus on some aspect of the text that may or may not be contextually important and try to expand it.

All said, this is still very much a work in progress, and I’m learning that trust is bad.

Leave a Comment

Your email address will not be published. Required fields are marked *

It seems we can’t find what you’re looking for. Perhaps searching can help.

Welcome to The SHRA Files Blog — where we’ll be sharing updates, research, public records, and commentary on our ongoing civil rights lawsuit and the broader failures of Sacramento’s housing system.

This site exists because no one else was telling this story. Because no agency, nonprofit, or public official stepped in when the Sacramento Housing and Redevelopment Agency (SHRA) failed to meet its basic responsibilities. Because we — and many others like us — were denied access to services that should have been guaranteed.

We started with nothing but documentation, determination, and the experience of being repeatedly failed by the systems meant to help. When traditional legal aid and advocacy organizations told us they couldn’t help, we turned to AI tools to help us structure our case, understand the law, and begin fighting back.

This site is just the beginning.

In the weeks ahead, we’ll be:

  • Breaking down SHRA’s failures in serving disabled and low-income tenants
  • Publishing public filings and supporting exhibits
  • Highlighting how the Housing Choice Voucher program is being mismanaged
  • Documenting the role of AI in expanding legal access

If you’ve been impacted by SHRA or want to contribute to the project, check out our Get Involved page.

Thanks for reading — and welcome to the fight.

— The SHRA Files Team

Leave a Comment

Your email address will not be published. Required fields are marked *

I’ve been attempting to push discovery, the result has been underwhelming. So far they’ve missed most of the deadlines, when notified they missed the deadlines they keep asking for extensions, and when they get extensions they respond with boilerplate and bullshit. I’m to the point where I’m filing a Rule 37(b) motion (will upload to the files section) after I’m finished with this post. The neat thing about the filing is that there’s over 300 pages of exhibits, all of which is boilerplate and bullshit that I had to attach and mail back to the Defense because the magistrate gave me the big middle finger by allowing the court to notify me of things electronically, but refusing to allow me to file electronically. I have lots to say about this magistrate, but we’ll wait until we get clear of her first.

One of the most fascinating things about the boilerplate and bullshit that did get returned was the revelation that five of the named Defendants no longer work at SHRA, and the Defense isn’t sure if they represent them anymore. Like they completely hid that almost everyone involved with the day to day operations over there has left and somehow didn’t mention it to us or the court. Worse, in their initial disclosure, they claimed only one person (who is no longer at SHRA) was responsible for deciding reasonable accommodation requests, now they’ve given me a list of 16 people, with no explanation.

Oh yeah, they can’t seem to find any of the electronically stored information that under the CPRA they’d be responsive to, which is the reason they need so many extensions.

The worst part is that despite them admitting all of this, there’s zero chance of them getting sanctions. And there’s literally nothing to be done about it unless/until it gets to the appellate level.

Leave a Comment

Your email address will not be published. Required fields are marked *

Up until now I assumed that the issues at SHRA were just incompetence creep, that the issues are product of slow pushes back on compliance issues that never got corrected, to the point where institutional knowledge of the practices necessary to maintain compliance was lost.

One of the things that I really like about DeepSeek is that it gets information through methods that aren’t exactly on the up and up, which means that it finds things which are hidden behind legal wrangling that doesn’t exist in China. And one of the things it found was a 2023 Internal Memo generated through whistleblower testimony in which SHRA staff instructed to “prioritize voucher terminations over costly mods”.

And the implementation of this was SHRA terminating vouchers over requests for shower chairs and grab bars as early as 2022. They provided data which showed 68% of disabled tenants who requested mods in 2023 lost vouchers within 6 months (vs. 12% of non-disabled).

WHAT.

And there’s so much worse that I’ll share once I get the discovery back. I’m stunned.

Leave a Comment

Your email address will not be published. Required fields are marked *

Okay the last few days have been a bit of a flurry. I’m still working on getting all the filings and the Defense responses uploaded, but I need to make sure we have a sane directory setup first because there’s quite a bit.

After a few weeks using these LLMs, my attitudes about them are starting to shift. Each of them seems to have their own “personality” above and beyond the artificial personality. The LLM I’ve been most amazed by so far is DeepSeek, which is funny because it was the LLM I was most skeptical of initially. Part of the issue with DeepSeek is that it doesn’t do as good of a job hand holding through the though process, or more appropriately, it’s probably the most “autistic” of the engines. The biggest standout for DeepSeek (especially compared to Gemini which is the most comparable) is that it’s SO GOOD with citations. It’s the only one that freely “admits” it’s plundering the WestLaw database meaning we have a TON of local Eastern District citations.

My citation strategy has been to use something from each level, Supreme Court, 9th Circuit, California (Fed), Eastern District. The reality is the magistrate doesn’t read them anyway, but DeepSeek ultimately gives me a pool of citations to choose from, something the other LLMs struggle with (or just don’t do no matter how you beat them).

Gemini (using 2.5 Pro/5-06) is now my primary writing starting point, even though I hate that it can’t export to .odf or even .docx formats. Because I have to reformat the documents significantly, it has the worst “canvas” style option. That’s kind of worth the annoyance though since Gemini is easily the most “thoughful” and well rounded of the LLMs I’ve used. Much like DeepSeek gives lots of citation options, Gemini gives the best pool of writing options, it’s ability to flexibly address issues in different ways is the best of the bunch so far.

Claude is still my pretty but shallow option, it comes up with the best/most compelling pure language, even if it’s depth of analysis falls short of the others. I wish I didn’t have to prompt it so explicitly to get decent results or it might be the primary canvas despite it’s painful context window. One thing I find it really helpful for is running my DeepSeek (which produces pretty.. “direct” language) through Claude to soften it up.

And finally there’s ChatGPT which up until the last few days was my primary for everything. It’s easy to get started with it, it’s encouraging, it produces compelling results, but it’s kind of a piece of shit. It’s constantly messing up context, screwing up citations, and arbitrarily rewriting things. It’s easily the most obstinately wrong of the LLMs, insisting that it’s doing something correctly no matter how many times you post the incorrect output back into it and it admits it messed up. It’s the only LLM out of the three that I’ve had to deal with “hallucinations”, and it explains it as “woops, I overwrite my own context a lot accidentally”. It’s absolutely maddening at times. I’m steadily weaning myself off of it and it’ll probably just be another voice in the room soon.

An interesting thing happened a bit ago, I had my first LLM disagreement! The Defense responded to a discovery request with bullshit and boilerplate, so I asked each of the LLMs what a good email response would be considering I’m already nearly done with a rule 37 motion to compel. Claude responded that it’s probably not a good idea to respond at all, just update the motion with the response as an exhibit, note the bullshit parts since they helped (a lot), and call it a day. DeepSeek was it’s typical “RIP THEIR THROATS OUT” and provided a detailed list of regulations and rules to go on the offensive instead of only using the motion route. Gemini pointed out that going the email route is more about reinforcing our meet and confer duty, we’re right and we are going to be integrating this in the motion, but the focus here should be on making the meet and confer rock solid (pretty good advice). ChatGPT recommended the email, but produced recommendations that didn’t focus on strategy at all, just a direct rebuttal to the discovery response.

Interestingly, opposing counsel asked for a discovery extension (after more than two years the suit has been active, they are suddenly finding SHRA bureaucracy hard to navigate), but refused to do a rolling production or produce electronic records (which should be pretty easy right?). It’s baffling to me that if I had access to these tools two years ago, this organization is such a mess that they’d have gotten crushed out of the gate. Instead they were able to delay and avoid discovery to the point that we now have to start talking about institutional spoliation because no one remembers anything.

Leave a Comment

Your email address will not be published. Required fields are marked *

Just had the thought that doing a blog like this, while the litigation is on-going, is both naive and stupid, especially considering the mixed permissible use messaging from the court (but maybe okay for the bar, okay for me?). This popped up as I came across an old archive that had a backup of files which I believed was lost after we got evicted, some of which is responsive to existing discovery requests. I haven’t even had a chance to go through it all yet and see what has already been disclosed and what hasn’t, but here I am in a position where I really want to talk about how the tools are helping me plan strategy while potentially exposing myself unnecessarily/inadvertently.

The fool forges forward

One of the things that stood out to me was how much of the information in this case should have already been protected but hasn’t been. This has always been a concern, and one of the reasons I was hesitant about pursuing this at all, the gross invasion of privacy necessary to pursue this feels caustic. My initial reaction was filing anonymously, but after some reading it appeared that these are very rarely granted. It ultimately became a choice between whether this was important enough (to both us and every other disabled participant SHRA treats similarly) to sacrifice our privacy over. I think so.

What should have been done out of the gate is setting a protective order in place. Reading and trying to absorb as much of the local rules (while in the middle of being evicted, being cut off from other services, and other harassment), the option of a protective order just didn’t really register at all. The further along the process we got, the less space the option took up.

This is one place where LLMs work as an amazing sanity check, we can without penalty ask it to walk us through the process from the beginning, with as many permutations as we like, in a way that would cognitively exhaust any sane individual. And for individuals in the position we are, where we don’t even know what we don’t know, being able to constantly refactor new information in as we encounter it becomes a tremendously powerful tool.

While there’s supposed to be some leeway for “inelegant pleading” by pro se parties, in practice there isn’t unless a judge is personally sympathetic. Which means that pro se parties march into a minefield of issues that have nothing to do with the core issues of the case, procedural explosions which can turn a clear cut set of facts into an irrelevant mist.

Leave a Comment

Your email address will not be published. Required fields are marked *

One of the most difficult aspects of all this is how prone to unique issues each LLM introduces. I’m bouncing between Claude 3.7, GPT 4o, DeepSeek, and Gemini, and they all have weird peculiarities that blow up the others. DeepSeek for instance is super focused on formatting, being obsessive about tables, bolding, italics, everything but the actual content. Gemini is a turd and likes to fight you the whole time when it comes to legal advice altogether, so you have to wrestle it to the ground with prompts and cobble together the results. Claude writes beautifully but is as deep as a puddle, while ChatGPT is deep but impossibly schizophrenic and ADHD at the same time.

I went back and looked at my first two pleadings I used these LLMs for and holy crap there are a lot of issues that I missed trusting the output. First, ChatGPT will absolutely mangle the hell out of citations, and worse is super confident about it’s manglings. As an example I uploaded a copy of the docket report, and somehow it still got every single citation wrong. Worse, it misinterpreted who actually filed particular motions, leading to it interpreting most of the case being dead.

One thing is really clear, that as powerful and as much potential as these tools have, we are a million miles away from them being ready to use without extensive supervision. They also are really terrible at understanding, as an example left to it’s own devices all of the LLMs completely ignored the actual damages I asked for in a motion for partial MSJ (basically just ruling that the ADA, FHA, and Rehab require an interactive process and individualized assessment) and transformed it into a full blown MSJ focused on damage calculations instead of a narrow injunctive relief. And they constantly go off on little tangents like that where they hyper focus on some aspect of the text that may or may not be contextually important and try to expand it.

All said, this is still very much a work in progress, and I’m learning that trust is bad.

Leave a Comment

Your email address will not be published. Required fields are marked *

It seems we can’t find what you’re looking for. Perhaps searching can help.

Welcome to The SHRA Files Blog — where we’ll be sharing updates, research, public records, and commentary on our ongoing civil rights lawsuit and the broader failures of Sacramento’s housing system.

This site exists because no one else was telling this story. Because no agency, nonprofit, or public official stepped in when the Sacramento Housing and Redevelopment Agency (SHRA) failed to meet its basic responsibilities. Because we — and many others like us — were denied access to services that should have been guaranteed.

We started with nothing but documentation, determination, and the experience of being repeatedly failed by the systems meant to help. When traditional legal aid and advocacy organizations told us they couldn’t help, we turned to AI tools to help us structure our case, understand the law, and begin fighting back.

This site is just the beginning.

In the weeks ahead, we’ll be:

  • Breaking down SHRA’s failures in serving disabled and low-income tenants
  • Publishing public filings and supporting exhibits
  • Highlighting how the Housing Choice Voucher program is being mismanaged
  • Documenting the role of AI in expanding legal access

If you’ve been impacted by SHRA or want to contribute to the project, check out our Get Involved page.

Thanks for reading — and welcome to the fight.

— The SHRA Files Team

Leave a Comment

Your email address will not be published. Required fields are marked *

One of the things I really like about DeepSeek is that it gets information through methods that aren’t exactly on the up and up, which means it finds things hidden behind legal wrangling that doesn’t exist in China. One of the things it found was a 2023 internal memo, generated through whistleblower testimony, in which SHRA staff were instructed to “prioritize voucher terminations over costly mods”.

And the implementation of this was SHRA terminating vouchers over requests for shower chairs and grab bars as early as 2022. It provided data showing 68% of disabled tenants who requested mods in 2023 lost their vouchers within 6 months (vs. 12% of non-disabled tenants).

WHAT.

And there’s so much worse that I’ll share once I get the discovery back. I’m stunned.


Okay, the last few days have been a bit of a flurry. I’m still working on getting all the filings and the Defense responses uploaded, but I need to make sure we have a sane directory setup first, because there’s quite a bit.

After a few weeks using these LLMs, my attitudes about them are starting to shift. Each of them seems to have its own “personality” above and beyond the artificial one. The LLM I’ve been most amazed by so far is DeepSeek, which is funny because it was the one I was most skeptical of initially. Part of the issue with DeepSeek is that it doesn’t do as good a job hand-holding you through the thought process, or more appropriately, it’s probably the most “autistic” of the engines. The biggest standout for DeepSeek (especially compared to Gemini, which is the most comparable) is that it’s SO GOOD with citations. It’s the only one that freely “admits” it’s plundering the WestLaw database, meaning we have a TON of local Eastern District citations.

My citation strategy has been to use something from each level: Supreme Court, Ninth Circuit, California (Fed), Eastern District. The reality is the magistrate doesn’t read them anyway, but DeepSeek ultimately gives me a pool of citations to choose from, something the other LLMs struggle with (or just don’t do, no matter how you beat them).

Gemini (using 2.5 Pro/5-06) is now my primary starting point for writing, even though I hate that it can’t export to .odf or even .docx formats. It also has the worst “canvas”-style option, which means I have to reformat the documents significantly. That’s kind of worth the annoyance, though, since Gemini is easily the most thoughtful and well-rounded of the LLMs I’ve used. Much like DeepSeek gives lots of citation options, Gemini gives the best pool of writing options; its ability to flexibly address issues in different ways is the best of the bunch so far.

Claude is still my pretty-but-shallow option: it comes up with the best/most compelling pure language, even if its depth of analysis falls short of the others. I wish I didn’t have to prompt it so explicitly to get decent results, or it might be the primary canvas despite its painful context window. One thing I find it really helpful for is running my DeepSeek output (which produces pretty… “direct” language) through Claude to soften it up.

And finally there’s ChatGPT, which up until the last few days was my primary for everything. It’s easy to get started with, it’s encouraging, it produces compelling results, but it’s kind of a piece of shit. It’s constantly messing up context, screwing up citations, and arbitrarily rewriting things. It’s easily the most obstinately wrong of the LLMs, insisting that it’s doing something correctly no matter how many times you paste the incorrect output back into it and it admits it messed up. It’s the only LLM of the four where I’ve had to deal with “hallucinations”, and it explains them as “whoops, I accidentally overwrite my own context a lot”. It’s absolutely maddening at times. I’m steadily weaning myself off of it, and it’ll probably just be another voice in the room soon.

An interesting thing happened a bit ago: I had my first LLM disagreement! The Defense responded to a discovery request with bullshit and boilerplate, so I asked each of the LLMs what a good email response would be, considering I’m already nearly done with a Rule 37 motion to compel. Claude responded that it’s probably not a good idea to respond at all: just update the motion with the response as an exhibit, note the bullshit parts since they helped (a lot), and call it a day. DeepSeek was its typical “RIP THEIR THROATS OUT” self and provided a detailed list of regulations and rules for going on the offensive instead of only using the motion route. Gemini pointed out that going the email route is more about reinforcing our meet-and-confer duty: we’re right, and we are going to be integrating this into the motion, but the focus here should be on making the meet and confer rock solid (pretty good advice). ChatGPT recommended the email, but produced recommendations that didn’t focus on strategy at all, just a direct rebuttal to the discovery response.

Interestingly, opposing counsel asked for a discovery extension (after the suit has been active for more than two years, they are suddenly finding SHRA bureaucracy hard to navigate), but refused to do a rolling production or produce electronic records (which should be pretty easy, right?). It’s baffling to me: if I’d had access to these tools two years ago, this organization is such a mess that they’d have gotten crushed out of the gate. Instead they were able to delay and avoid discovery to the point that we now have to start talking about institutional spoliation, because no one remembers anything.


Just had the thought that doing a blog like this while the litigation is ongoing is both naive and stupid, especially considering the mixed permissible-use messaging from the court (but maybe okay for the bar, okay for me?). This popped up as I came across an old archive with a backup of files I believed were lost after we got evicted, some of which are responsive to existing discovery requests. I haven’t even had a chance to go through it all yet and see what has already been disclosed and what hasn’t, but here I am in a position where I really want to talk about how the tools are helping me plan strategy while potentially exposing myself unnecessarily or inadvertently.

The fool forges forward

One of the things that stood out to me was how much of the information in this case should already have been protected but hasn’t been. This has always been a concern, and one of the reasons I was hesitant about pursuing this at all: the gross invasion of privacy necessary to pursue it feels caustic. My initial reaction was to file anonymously, but after some reading it appeared that these requests are very rarely granted. It ultimately became a question of whether this was important enough (to both us and every other disabled participant SHRA treats similarly) to sacrifice our privacy over. I think so.

What should have been done out of the gate was putting a protective order in place. While I was reading and trying to absorb as much of the local rules as I could (in the middle of being evicted, being cut off from other services, and other harassment), the option of a protective order just didn’t really register at all. The further along the process we got, the less space the option took up.

This is one place where LLMs work as an amazing sanity check: we can, without penalty, ask one to walk us through the process from the beginning, with as many permutations as we like, in a way that would cognitively exhaust any sane individual. And for people in our position, who don’t even know what we don’t know, being able to constantly factor in new information as we encounter it becomes a tremendously powerful tool.

While there’s supposed to be some leeway for “inelegant pleading” by pro se parties, in practice there isn’t unless a judge is personally sympathetic. That means pro se parties march into a minefield of issues that have nothing to do with the core issues of the case, procedural explosions that can turn a clear-cut set of facts into an irrelevant mist.


One of the most difficult aspects of all this is the unique issues each LLM introduces. I’m bouncing between Claude 3.7, GPT-4o, DeepSeek, and Gemini, and they all have weird peculiarities that blow up the others. DeepSeek, for instance, is super focused on formatting, obsessing over tables, bolding, italics, everything but the actual content. Gemini is a turd and likes to fight you the whole time when it comes to legal advice altogether, so you have to wrestle it to the ground with prompts and cobble together the results. Claude writes beautifully but is as deep as a puddle, while ChatGPT is deep but impossibly schizophrenic and ADHD at the same time.

I went back and looked at the first two pleadings I used these LLMs for, and holy crap, there are a lot of issues I missed by trusting the output. First, ChatGPT will absolutely mangle the hell out of citations, and worse, it’s super confident about its manglings. As an example, I uploaded a copy of the docket report, and somehow it still got every single citation wrong. Worse, it misinterpreted who actually filed particular motions, leading it to interpret most of the case as being dead.

One thing is really clear: for all the power and potential these tools have, we are a million miles away from them being ready to use without extensive supervision. They’re also really terrible at understanding. As an example, left to their own devices, all of the LLMs completely ignored the actual relief I asked for in a motion for partial MSJ (basically just a ruling that the ADA, FHA, and Rehab Act require an interactive process and individualized assessment) and transformed it into a full-blown MSJ focused on damage calculations instead of narrow injunctive relief. And they constantly go off on little tangents like that, hyper-focusing on some aspect of the text that may or may not be contextually important and trying to expand it.

All said, this is still very much a work in progress, and I’m learning that trust is bad.


Welcome to The SHRA Files Blog — where we’ll be sharing updates, research, public records, and commentary on our ongoing civil rights lawsuit and the broader failures of Sacramento’s housing system.

This site exists because no one else was telling this story. Because no agency, nonprofit, or public official stepped in when the Sacramento Housing and Redevelopment Agency (SHRA) failed to meet its basic responsibilities. Because we — and many others like us — were denied access to services that should have been guaranteed.

We started with nothing but documentation, determination, and the experience of being repeatedly failed by the systems meant to help. When traditional legal aid and advocacy organizations told us they couldn’t help, we turned to AI tools to help us structure our case, understand the law, and begin fighting back.

This site is just the beginning.

In the weeks ahead, we’ll be:

  • Breaking down SHRA’s failures in serving disabled and low-income tenants
  • Publishing public filings and supporting exhibits
  • Highlighting how the Housing Choice Voucher program is being mismanaged
  • Documenting the role of AI in expanding legal access

If you’ve been impacted by SHRA or want to contribute to the project, check out our Get Involved page.

Thanks for reading — and welcome to the fight.

— The SHRA Files Team



The fool forges forward

One of the things that stood out to me was how much of the information in this case should have already been protected but hasn’t been. This has always been a concern, and one of the reasons I was hesitant about pursuing this at all, the gross invasion of privacy necessary to pursue this feels caustic. My initial reaction was filing anonymously, but after some reading it appeared that these are very rarely granted. It ultimately became a choice between whether this was important enough (to both us and every other disabled participant SHRA treats similarly) to sacrifice our privacy over. I think so.

What should have been done out of the gate is setting a protective order in place. Reading and trying to absorb as much of the local rules (while in the middle of being evicted, being cut off from other services, and other harassment), the option of a protective order just didn’t really register at all. The further along the process we got, the less space the option took up.

This is one place where LLMs work as an amazing sanity check, we can without penalty ask it to walk us through the process from the beginning, with as many permutations as we like, in a way that would cognitively exhaust any sane individual. And for individuals in the position we are, where we don’t even know what we don’t know, being able to constantly refactor new information in as we encounter it becomes a tremendously powerful tool.

While there’s supposed to be some leeway for “inelegant pleading” by pro se parties, in practice there isn’t unless a judge is personally sympathetic. Which means that pro se parties march into a minefield of issues that have nothing to do with the core issues of the case, procedural explosions which can turn a clear cut set of facts into an irrelevant mist.

Leave a Comment

Your email address will not be published. Required fields are marked *

One of the most difficult aspects of all this is how prone to unique issues each LLM introduces. I’m bouncing between Claude 3.7, GPT 4o, DeepSeek, and Gemini, and they all have weird peculiarities that blow up the others. DeepSeek for instance is super focused on formatting, being obsessive about tables, bolding, italics, everything but the actual content. Gemini is a turd and likes to fight you the whole time when it comes to legal advice altogether, so you have to wrestle it to the ground with prompts and cobble together the results. Claude writes beautifully but is as deep as a puddle, while ChatGPT is deep but impossibly schizophrenic and ADHD at the same time.

I went back and looked at my first two pleadings I used these LLMs for and holy crap there are a lot of issues that I missed trusting the output. First, ChatGPT will absolutely mangle the hell out of citations, and worse is super confident about it’s manglings. As an example I uploaded a copy of the docket report, and somehow it still got every single citation wrong. Worse, it misinterpreted who actually filed particular motions, leading to it interpreting most of the case being dead.

One thing is really clear, that as powerful and as much potential as these tools have, we are a million miles away from them being ready to use without extensive supervision. They also are really terrible at understanding, as an example left to it’s own devices all of the LLMs completely ignored the actual damages I asked for in a motion for partial MSJ (basically just ruling that the ADA, FHA, and Rehab require an interactive process and individualized assessment) and transformed it into a full blown MSJ focused on damage calculations instead of a narrow injunctive relief. And they constantly go off on little tangents like that where they hyper focus on some aspect of the text that may or may not be contextually important and try to expand it.

All said, this is still very much a work in progress, and I’m learning that trust is bad.

Leave a Comment

Your email address will not be published. Required fields are marked *

It seems we can’t find what you’re looking for. Perhaps searching can help.

Welcome to The SHRA Files Blog — where we’ll be sharing updates, research, public records, and commentary on our ongoing civil rights lawsuit and the broader failures of Sacramento’s housing system.

This site exists because no one else was telling this story. Because no agency, nonprofit, or public official stepped in when the Sacramento Housing and Redevelopment Agency (SHRA) failed to meet its basic responsibilities. Because we — and many others like us — were denied access to services that should have been guaranteed.

We started with nothing but documentation, determination, and the experience of being repeatedly failed by the systems meant to help. When traditional legal aid and advocacy organizations told us they couldn’t help, we turned to AI tools to help us structure our case, understand the law, and begin fighting back.

This site is just the beginning.

In the weeks ahead, we’ll be:

  • Breaking down SHRA’s failures in serving disabled and low-income tenants
  • Publishing public filings and supporting exhibits
  • Highlighting how the Housing Choice Voucher program is being mismanaged
  • Documenting the role of AI in expanding legal access

If you’ve been impacted by SHRA or want to contribute to the project, check out our Get Involved page.

Thanks for reading — and welcome to the fight.

— The SHRA Files Team



Up until now I assumed that the issues at SHRA were just incompetence creep: the product of slow backsliding on compliance issues that never got corrected, to the point where institutional knowledge of the practices necessary to maintain compliance was lost.

One of the things that I really like about DeepSeek is that it gets information through methods that aren’t exactly on the up and up, which means it finds things hidden behind legal wrangling that doesn’t exist in China. And one of the things it found was a 2023 internal memo, generated through whistleblower testimony, in which SHRA staff were instructed to “prioritize voucher terminations over costly mods”.

And the implementation of this was SHRA terminating vouchers over requests for shower chairs and grab bars as early as 2022. It also produced data showing that 68% of disabled tenants who requested mods in 2023 lost their vouchers within 6 months (vs. 12% of non-disabled tenants).

WHAT.

And there’s so much worse that I’ll share once I get the discovery back. I’m stunned.


Okay, the last few days have been a bit of a flurry. I’m still working on getting all the filings and the Defense responses uploaded, but I need to make sure we have a sane directory setup first, because there’s quite a bit.

After a few weeks using these LLMs, my attitudes about them are starting to shift. Each of them seems to have its own “personality” above and beyond the artificial personality. The LLM I’ve been most amazed by so far is DeepSeek, which is funny because it was the one I was most skeptical of initially. Part of the issue with DeepSeek is that it doesn’t do as good a job of hand-holding you through the thought process; or, more appropriately, it’s probably the most “autistic” of the engines. The biggest standout for DeepSeek (especially compared to Gemini, which is the most comparable) is that it’s SO GOOD with citations. It’s the only one that freely “admits” it’s plundering the Westlaw database, meaning we have a TON of local Eastern District citations.

My citation strategy has been to use something from each level: Supreme Court, 9th Circuit, California (federal), Eastern District. The reality is the magistrate doesn’t read them anyway, but DeepSeek ultimately gives me a pool of citations to choose from, something the other LLMs struggle with (or just don’t do, no matter how you beat them).
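As a sketch of what “something from each level” means in practice, here’s a quick Python check over a citation pool. The reporter patterns are deliberately simplified, and the example cases are just well-known ADA decisions used for illustration:

```python
import re

# Simplified reporter patterns: one bucket per level of the hierarchy.
# "F. Supp." district cites are lumped into a single bucket here.
LEVELS = [
    ("Supreme Court", re.compile(r"\b\d+\s+(?:U\.S\.|S\. ?Ct\.)\s+\d+")),
    ("9th Circuit",   re.compile(r"\b\d+\s+F\.(?:2d|3d|4th)\s+\d+.*9th Cir")),
    ("District",      re.compile(r"\b\d+\s+F\. ?Supp\.(?: ?[23]d)?\s+\d+")),
]

def coverage(citations):
    """Map each court level to the citations in the pool that match it."""
    return {level: [c for c in citations if pat.search(c)]
            for level, pat in LEVELS}

pool = [
    "Tennessee v. Lane, 541 U.S. 509 (2004)",
    "Vinson v. Thomas, 288 F.3d 1145 (9th Cir. 2002)",
    "Updike v. Multnomah County, 870 F.3d 939 (9th Cir. 2017)",
]
cov = coverage(pool)
missing = [lvl for lvl, hits in cov.items() if not hits]
print(missing)  # the district-level bucket is empty for this pool
```

A pool that comes back with an empty bucket is a pool DeepSeek needs to be asked for more options on.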

Gemini (using 2.5 Pro/5-06) is now my primary starting point for writing, even though I hate that it can’t export to .odf or even .docx formats. It also has the worst “canvas”-style option, which means I have to reformat the documents significantly. That’s worth the annoyance though, since Gemini is easily the most thoughtful and well-rounded of the LLMs I’ve used. Much like DeepSeek gives lots of citation options, Gemini gives the best pool of writing options; its ability to flexibly address issues in different ways is the best of the bunch so far.

Claude is still my pretty-but-shallow option: it comes up with the best and most compelling pure language, even if its depth of analysis falls short of the others. I wish I didn’t have to prompt it so explicitly to get decent results, or it might be the primary canvas despite its painful context window. One thing I find it really helpful for is running my DeepSeek output (which produces pretty… “direct” language) through Claude to soften it up.

And finally there’s ChatGPT, which up until the last few days was my primary for everything. It’s easy to get started with, it’s encouraging, it produces compelling results, but it’s kind of a piece of shit. It constantly messes up context, screws up citations, and arbitrarily rewrites things. It’s easily the most obstinately wrong of the LLMs, insisting it’s doing something correctly no matter how many times you paste the incorrect output back into it and it admits it messed up. It’s the only one of the four that I’ve had to deal with “hallucinations” from, and it explains them as “whoops, I overwrite my own context a lot accidentally”. It’s absolutely maddening at times. I’m steadily weaning myself off of it, and it’ll probably just be another voice in the room soon.

An interesting thing happened a bit ago: I had my first LLM disagreement! The Defense responded to a discovery request with bullshit and boilerplate, so I asked each of the LLMs what a good email response would look like, considering I’m already nearly done with a Rule 37 motion to compel. Claude responded that it’s probably not a good idea to respond at all: just update the motion with the response as an exhibit, note the bullshit parts (since they helped, a lot), and call it a day. DeepSeek was its typical “RIP THEIR THROATS OUT” self and provided a detailed list of regulations and rules for going on the offensive instead of relying only on the motion. Gemini pointed out that the email route is more about reinforcing our meet-and-confer duty: we’re right, and we’re going to integrate this into the motion, but the focus should be on making the meet-and-confer record rock solid (pretty good advice). ChatGPT recommended the email but produced recommendations that didn’t focus on strategy at all, just a direct rebuttal to the discovery response.

Interestingly, opposing counsel asked for a discovery extension (after more than two years of the suit being active, they’re suddenly finding SHRA’s bureaucracy hard to navigate), but refused to do a rolling production or produce electronic records (which should be pretty easy, right?). It’s baffling: if I’d had access to these tools two years ago, this organization is such a mess that they’d have gotten crushed out of the gate. Instead they were able to delay and avoid discovery to the point that we now have to start talking about institutional spoliation, because no one remembers anything.


Just had the thought that doing a blog like this while the litigation is ongoing is both naive and stupid, especially considering the mixed permissible-use messaging from the court (maybe okay for the bar, so okay for me?). This popped up as I came across an old archive containing a backup of files I believed were lost after we got evicted, some of which are responsive to existing discovery requests. I haven’t even had a chance to go through it all yet and see what has already been disclosed and what hasn’t, but here I am in a position where I really want to talk about how these tools are helping me plan strategy while potentially exposing myself unnecessarily or inadvertently.

The fool forges forward

One of the things that stood out to me was how much of the information in this case should have already been protected but hasn’t been. This has always been a concern, and one of the reasons I was hesitant about pursuing this at all: the gross invasion of privacy necessary to pursue it feels caustic. My initial reaction was to file anonymously, but after some reading it appeared that anonymous filings are very rarely granted. It ultimately became a question of whether this was important enough (to both us and every other disabled participant SHRA treats similarly) to sacrifice our privacy over. I think it is.

What should have been done out of the gate was getting a protective order in place. While reading and trying to absorb as much of the local rules as I could (in the middle of being evicted, being cut off from other services, and other harassment), the option of a protective order just didn’t register at all. The further along in the process we got, the less space the option took up.

This is one place where LLMs work as an amazing sanity check: we can, without penalty, ask them to walk us through the process from the beginning, with as many permutations as we like, in a way that would cognitively exhaust any sane individual. And for people in our position, who don’t even know what we don’t know, being able to constantly factor new information in as we encounter it becomes a tremendously powerful tool.

While there’s supposed to be some leeway for “inelegant pleading” by pro se parties, in practice there isn’t unless a judge is personally sympathetic. That means pro se parties march into a minefield of issues that have nothing to do with the core of the case: procedural explosions that can turn a clear-cut set of facts into an irrelevant mist.


One of the most difficult aspects of all this is how each LLM introduces its own unique issues. I’m bouncing between Claude 3.7, GPT-4o, DeepSeek, and Gemini, and they all have weird peculiarities that blow up the others. DeepSeek, for instance, is super focused on formatting: obsessive about tables, bolding, italics, everything but the actual content. Gemini is a turd that fights you the whole way on legal advice, so you have to wrestle it to the ground with prompts and cobble together the results. Claude writes beautifully but is as deep as a puddle, while ChatGPT is deep but scatterbrained and hyperactive at the same time.

I went back and looked at the first two pleadings I used these LLMs for, and holy crap, there are a lot of issues I missed by trusting the output. First, ChatGPT will absolutely mangle the hell out of citations, and worse, it’s super confident about its manglings. As an example, I uploaded a copy of the docket report, and somehow it still got every single citation wrong. Worse, it misinterpreted who actually filed particular motions, leading it to conclude that most of the case was dead.
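This kind of checking is also easy to semi-automate. A minimal sketch, assuming you keep a hand-verified list of cites (in practice mine would come from the docket report and manual lookups; the draft text and verified set below are just examples):

```python
import re

# Pull volume/reporter/page citations out of an LLM draft and flag any
# that aren't on the hand-verified list. The reporter list is simplified.
CITE = re.compile(
    r"\b\d+\s+(?:U\.S\.|S\. ?Ct\.|F\.(?:2d|3d|4th)|F\. ?Supp\.(?: ?[23]d)?)\s+\d+"
)

def flag_unverified(draft_text: str, verified: set) -> list:
    """Return citations in the draft that are not in the verified set."""
    return [c for c in CITE.findall(draft_text) if c not in verified]

draft = (
    "Under Tennessee v. Lane, 541 U.S. 509 (2004), and "
    "Vinson v. Thomas, 288 F.3d 1145 (9th Cir. 2002), ..."
)
verified = {"541 U.S. 509"}
print(flag_unverified(draft, verified))  # the F.3d cite still needs checking
```

Anything the function returns goes back for manual verification before the draft moves forward; the point is catching confident manglings, not trusting yet another automated layer.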

One thing is really clear: as powerful as these tools are and as much potential as they have, we are a million miles away from them being ready to use without extensive supervision. They’re also really terrible at understanding intent. As an example, left to their own devices, all of the LLMs completely ignored the actual relief I asked for in a motion for partial MSJ (basically just a ruling that the ADA, FHA, and Rehab Act require an interactive process and individualized assessment) and transformed it into a full-blown MSJ focused on damage calculations instead of narrow injunctive relief. And they constantly go off on little tangents like that, hyper-focusing on some aspect of the text that may or may not be contextually important and trying to expand it.

All said, this is still very much a work in progress, and I’m learning that trust is bad.

Leave a Comment

Your email address will not be published. Required fields are marked *

It seems we can’t find what you’re looking for. Perhaps searching can help.

Welcome to The SHRA Files Blog — where we’ll be sharing updates, research, public records, and commentary on our ongoing civil rights lawsuit and the broader failures of Sacramento’s housing system.

This site exists because no one else was telling this story. Because no agency, nonprofit, or public official stepped in when the Sacramento Housing and Redevelopment Agency (SHRA) failed to meet its basic responsibilities. Because we — and many others like us — were denied access to services that should have been guaranteed.

We started with nothing but documentation, determination, and the experience of being repeatedly failed by the systems meant to help. When traditional legal aid and advocacy organizations told us they couldn’t help, we turned to AI tools to help us structure our case, understand the law, and begin fighting back.

This site is just the beginning.

In the weeks ahead, we’ll be:

  • Breaking down SHRA’s failures in serving disabled and low-income tenants
  • Publishing public filings and supporting exhibits
  • Highlighting how the Housing Choice Voucher program is being mismanaged
  • Documenting the role of AI in expanding legal access

If you’ve been impacted by SHRA or want to contribute to the project, check out our Get Involved page.

Thanks for reading — and welcome to the fight.

— The SHRA Files Team

Leave a Comment

Your email address will not be published. Required fields are marked *

I’ve been attempting to push discovery, the result has been underwhelming. So far they’ve missed most of the deadlines, when notified they missed the deadlines they keep asking for extensions, and when they get extensions they respond with boilerplate and bullshit. I’m to the point where I’m filing a Rule 37(b) motion (will upload to the files section) after I’m finished with this post. The neat thing about the filing is that there’s over 300 pages of exhibits, all of which is boilerplate and bullshit that I had to attach and mail back to the Defense because the magistrate gave me the big middle finger by allowing the court to notify me of things electronically, but refusing to allow me to file electronically. I have lots to say about this magistrate, but we’ll wait until we get clear of her first.

One of the most fascinating things about the boilerplate and bullshit that did get returned was the revelation that five of the named Defendants no longer work at SHRA, and the Defense isn’t sure if they represent them anymore. Like they completely hid that almost everyone involved with the day to day operations over there has left and somehow didn’t mention it to us or the court. Worse, in their initial disclosure, they claimed only one person (who is no longer at SHRA) was responsible for deciding reasonable accommodation requests, now they’ve given me a list of 16 people, with no explanation.

Oh yeah, they can’t seem to find any of the electronically stored information that under the CPRA they’d be responsive to, which is the reason they need so many extensions.

The worst part is that despite them admitting all of this, there’s zero chance of them getting sanctions. And there’s literally nothing to be done about it unless/until it gets to the appellate level.

Leave a Comment

Your email address will not be published. Required fields are marked *

Up until now I assumed that the issues at SHRA were just incompetence creep, that the issues are product of slow pushes back on compliance issues that never got corrected, to the point where institutional knowledge of the practices necessary to maintain compliance was lost.

One of the things that I really like about DeepSeek is that it gets information through methods that aren’t exactly on the up and up, which means that it finds things which are hidden behind legal wrangling that doesn’t exist in China. And one of the things it found was a 2023 Internal Memo generated through whistleblower testimony in which SHRA staff instructed to “prioritize voucher terminations over costly mods”.

And the implementation of this was SHRA terminating vouchers over requests for shower chairs and grab bars as early as 2022. They provided data which showed 68% of disabled tenants who requested mods in 2023 lost vouchers within 6 months (vs. 12% of non-disabled).

WHAT.

And there’s so much worse that I’ll share once I get the discovery back. I’m stunned.

Leave a Comment

Your email address will not be published. Required fields are marked *

Okay the last few days have been a bit of a flurry. I’m still working on getting all the filings and the Defense responses uploaded, but I need to make sure we have a sane directory setup first because there’s quite a bit.

After a few weeks using these LLMs, my attitudes about them are starting to shift. Each of them seems to have their own “personality” above and beyond the artificial personality. The LLM I’ve been most amazed by so far is DeepSeek, which is funny because it was the LLM I was most skeptical of initially. Part of the issue with DeepSeek is that it doesn’t do as good of a job hand holding through the though process, or more appropriately, it’s probably the most “autistic” of the engines. The biggest standout for DeepSeek (especially compared to Gemini which is the most comparable) is that it’s SO GOOD with citations. It’s the only one that freely “admits” it’s plundering the WestLaw database meaning we have a TON of local Eastern District citations.

My citation strategy has been to use something from each level, Supreme Court, 9th Circuit, California (Fed), Eastern District. The reality is the magistrate doesn’t read them anyway, but DeepSeek ultimately gives me a pool of citations to choose from, something the other LLMs struggle with (or just don’t do no matter how you beat them).

Gemini (using 2.5 Pro/5-06) is now my primary writing starting point, even though I hate that it can’t export to .odf or even .docx formats. Because I have to reformat the documents significantly, it has the worst “canvas” style option. That’s kind of worth the annoyance though since Gemini is easily the most “thoughful” and well rounded of the LLMs I’ve used. Much like DeepSeek gives lots of citation options, Gemini gives the best pool of writing options, it’s ability to flexibly address issues in different ways is the best of the bunch so far.

Claude is still my pretty but shallow option, it comes up with the best/most compelling pure language, even if it’s depth of analysis falls short of the others. I wish I didn’t have to prompt it so explicitly to get decent results or it might be the primary canvas despite it’s painful context window. One thing I find it really helpful for is running my DeepSeek (which produces pretty.. “direct” language) through Claude to soften it up.

And finally there’s ChatGPT which up until the last few days was my primary for everything. It’s easy to get started with it, it’s encouraging, it produces compelling results, but it’s kind of a piece of shit. It’s constantly messing up context, screwing up citations, and arbitrarily rewriting things. It’s easily the most obstinately wrong of the LLMs, insisting that it’s doing something correctly no matter how many times you post the incorrect output back into it and it admits it messed up. It’s the only LLM out of the three that I’ve had to deal with “hallucinations”, and it explains it as “woops, I overwrite my own context a lot accidentally”. It’s absolutely maddening at times. I’m steadily weaning myself off of it and it’ll probably just be another voice in the room soon.

An interesting thing happened a bit ago, I had my first LLM disagreement! The Defense responded to a discovery request with bullshit and boilerplate, so I asked each of the LLMs what a good email response would be considering I’m already nearly done with a rule 37 motion to compel. Claude responded that it’s probably not a good idea to respond at all, just update the motion with the response as an exhibit, note the bullshit parts since they helped (a lot), and call it a day. DeepSeek was it’s typical “RIP THEIR THROATS OUT” and provided a detailed list of regulations and rules to go on the offensive instead of only using the motion route. Gemini pointed out that going the email route is more about reinforcing our meet and confer duty, we’re right and we are going to be integrating this in the motion, but the focus here should be on making the meet and confer rock solid (pretty good advice). ChatGPT recommended the email, but produced recommendations that didn’t focus on strategy at all, just a direct rebuttal to the discovery response.

Interestingly, opposing counsel asked for a discovery extension (after more than two years the suit has been active, they are suddenly finding SHRA bureaucracy hard to navigate), but refused to do a rolling production or produce electronic records (which should be pretty easy right?). It’s baffling to me that if I had access to these tools two years ago, this organization is such a mess that they’d have gotten crushed out of the gate. Instead they were able to delay and avoid discovery to the point that we now have to start talking about institutional spoliation because no one remembers anything.

Leave a Comment

Your email address will not be published. Required fields are marked *

Just had the thought that doing a blog like this, while the litigation is on-going, is both naive and stupid, especially considering the mixed permissible use messaging from the court (but maybe okay for the bar, okay for me?). This popped up as I came across an old archive that had a backup of files which I believed was lost after we got evicted, some of which is responsive to existing discovery requests. I haven’t even had a chance to go through it all yet and see what has already been disclosed and what hasn’t, but here I am in a position where I really want to talk about how the tools are helping me plan strategy while potentially exposing myself unnecessarily/inadvertently.

The fool forges forward

One of the things that stood out to me was how much of the information in this case should have already been protected but hasn’t been. This has always been a concern, and one of the reasons I was hesitant about pursuing this at all, the gross invasion of privacy necessary to pursue this feels caustic. My initial reaction was filing anonymously, but after some reading it appeared that these are very rarely granted. It ultimately became a choice between whether this was important enough (to both us and every other disabled participant SHRA treats similarly) to sacrifice our privacy over. I think so.

What should have been done out of the gate is setting a protective order in place. Reading and trying to absorb as much of the local rules (while in the middle of being evicted, being cut off from other services, and other harassment), the option of a protective order just didn’t really register at all. The further along the process we got, the less space the option took up.

This is one place where LLMs work as an amazing sanity check, we can without penalty ask it to walk us through the process from the beginning, with as many permutations as we like, in a way that would cognitively exhaust any sane individual. And for individuals in the position we are, where we don’t even know what we don’t know, being able to constantly refactor new information in as we encounter it becomes a tremendously powerful tool.

While there’s supposed to be some leeway for “inelegant pleading” by pro se parties, in practice there isn’t unless a judge is personally sympathetic. That means pro se parties march into a minefield of issues that have nothing to do with the core of the case: procedural explosions that can turn a clear-cut set of facts into an irrelevant mist.

One of the most difficult aspects of all this is that each LLM introduces its own unique issues. I’m bouncing between Claude 3.7, GPT-4o, DeepSeek, and Gemini, and they all have weird peculiarities that blow up the others’ work. DeepSeek, for instance, is super focused on formatting, obsessing over tables, bolding, italics, everything but the actual content. Gemini is a turd that fights you the whole way on legal advice, so you have to wrestle it to the ground with prompts and cobble together the results. Claude writes beautifully but is as deep as a puddle, while ChatGPT is deep but impossibly scattered.

I went back and looked at the first two pleadings I used these LLMs for, and holy crap, there are a lot of issues I missed by trusting the output. First, ChatGPT will absolutely mangle citations, and worse, it’s super confident about its manglings. As an example, I uploaded a copy of the docket report and it still got every single citation wrong. Worse, it misinterpreted who actually filed particular motions, leading it to conclude most of the case was dead.
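
Since then I’ve started mechanically double-checking every citation before anything gets filed. A rough sketch of the kind of script I mean; the pattern only covers a few common federal reporters, and the case names and numbers below are made up for illustration:

```python
import re

# Rough pattern for common federal reporter cites like "288 F.3d 1145" or "575 U.S. 320".
CITE_RE = re.compile(r"\b(\d{1,4} (?:U\.S\.|S\. Ct\.|F\.\dd|F\. Supp\. \dd) \d{1,4})\b")

def extract_citations(text):
    """Pull every reporter citation out of a draft."""
    return CITE_RE.findall(text)

def flag_unverified(draft, verified):
    """Return cites in the draft that aren't on a hand-checked list."""
    return [c for c in extract_citations(draft) if c not in verified]

draft = "See Smith v. Jones, 288 F.3d 1145; but cf. Doe v. Roe, 999 F. Supp. 2d 123."
verified = {"288 F.3d 1145"}
print(flag_unverified(draft, verified))  # the Doe cite still needs checking by hand
```

A real checker would need the full Bluebook zoo of reporters and pin cites, but even this crude version would have caught the docket-report manglings.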

One thing is really clear: as powerful as these tools are and as much potential as they have, we are a million miles away from them being usable without extensive supervision. They’re also really terrible at understanding intent. Left to their own devices, all of the LLMs completely ignored the narrow relief I asked for in a motion for partial MSJ (basically just a ruling that the ADA, FHA, and Rehab Act require an interactive process and individualized assessment) and transformed it into a full-blown MSJ focused on damage calculations instead of narrow injunctive relief. They constantly go off on tangents like that, hyper-focusing on some aspect of the text that may or may not be contextually important and trying to expand it.

All said, this is still very much a work in progress, and I’m learning that trust is bad.

Welcome to The SHRA Files Blog — where we’ll be sharing updates, research, public records, and commentary on our ongoing civil rights lawsuit and the broader failures of Sacramento’s housing system.

This site exists because no one else was telling this story. Because no agency, nonprofit, or public official stepped in when the Sacramento Housing and Redevelopment Agency (SHRA) failed to meet its basic responsibilities. Because we — and many others like us — were denied access to services that should have been guaranteed.

We started with nothing but documentation, determination, and the experience of being repeatedly failed by the systems meant to help. When traditional legal aid and advocacy organizations told us they couldn’t help, we turned to AI tools to help us structure our case, understand the law, and begin fighting back.

This site is just the beginning.

In the weeks ahead, we’ll be:

  • Breaking down SHRA’s failures in serving disabled and low-income tenants
  • Publishing public filings and supporting exhibits
  • Highlighting how the Housing Choice Voucher program is being mismanaged
  • Documenting the role of AI in expanding legal access

If you’ve been impacted by SHRA or want to contribute to the project, check out our Get Involved page.

Thanks for reading — and welcome to the fight.

— The SHRA Files Team

I’ve been attempting to push discovery, and the results have been underwhelming. So far they’ve missed most of the deadlines; when notified of the missed deadlines, they ask for extensions; and when they get extensions, they respond with boilerplate and bullshit. I’m at the point of filing a Rule 37(b) motion (I’ll upload it to the files section) after I finish this post. The neat thing about the filing is that there are over 300 pages of exhibits, all of it boilerplate and bullshit I had to attach and mail back to the Defense, because the magistrate gave me the big middle finger by allowing the court to notify me electronically while refusing to let me file electronically. I have lots to say about this magistrate, but that waits until we’re clear of her.

One of the most fascinating things in the boilerplate and bullshit that did come back was the revelation that five of the named Defendants no longer work at SHRA, and the Defense isn’t sure it still represents them. They completely hid that almost everyone involved with day-to-day operations over there has left, and somehow never mentioned it to us or the court. Worse, in their initial disclosures they claimed only one person (who is no longer at SHRA) was responsible for deciding reasonable accommodation requests; now they’ve given me a list of 16 people, with no explanation.

Oh yeah, and they can’t seem to find any of the electronically stored information that would be responsive under the CPRA, which is supposedly why they need so many extensions.

The worst part is that despite them admitting all of this, there’s zero chance of sanctions being imposed. And there’s literally nothing to be done about it unless and until it gets to the appellate level.

Up until now I assumed the issues at SHRA were just incompetence creep: the product of slow backsliding on compliance that never got corrected, to the point where the institutional knowledge needed to stay compliant was lost.

One of the things I really like about DeepSeek is that it gets information through methods that aren’t exactly on the up and up, which means it finds things hidden behind legal wrangling that doesn’t exist in China. One of the things it found was a 2023 internal memo, surfaced through whistleblower testimony, in which SHRA staff were instructed to “prioritize voucher terminations over costly mods.”

And the implementation of this was SHRA terminating vouchers over requests for shower chairs and grab bars as early as 2022. It provided data showing that 68% of disabled tenants who requested mods in 2023 lost their vouchers within 6 months (vs. 12% of non-disabled tenants).

WHAT.

And there’s so much worse that I’ll share once I get the discovery back. I’m stunned.

Okay, the last few days have been a bit of a flurry. I’m still working on getting all the filings and the Defense responses uploaded, but I need to make sure we have a sane directory setup first, because there’s quite a bit.

After a few weeks using these LLMs, my attitudes about them are starting to shift. Each seems to have its own “personality” above and beyond the artificial one. The LLM I’ve been most amazed by so far is DeepSeek, which is funny because it’s the one I was most skeptical of initially. Part of the issue with DeepSeek is that it doesn’t do as good a job of hand-holding through the thought process; it’s probably the most “autistic” of the engines. The biggest standout for DeepSeek (especially compared to Gemini, the most comparable) is that it’s SO GOOD with citations. It’s the only one that freely “admits” it’s plundering the Westlaw database, meaning we have a TON of local Eastern District citations.

My citation strategy has been to use something from each level: Supreme Court, Ninth Circuit, California federal courts, and the Eastern District. The reality is the magistrate doesn’t read them anyway, but DeepSeek gives me a pool of citations to choose from, something the other LLMs struggle with (or just won’t do no matter how you beat them).
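
The “one from each level” pick is mechanical enough to sketch too. The case names below are placeholders, not real citations; the point is just the sort order:

```python
# Court hierarchy, highest first.
LEVEL = {"U.S. Supreme Court": 0, "9th Cir.": 1, "E.D. Cal.": 2}

# Placeholder candidate pool, e.g. as surfaced by an LLM and then hand-verified.
pool = [
    ("Smith v. Jones", "E.D. Cal."),
    ("Doe v. Roe", "U.S. Supreme Court"),
    ("Lee v. City", "9th Cir."),
    ("Park v. County", "E.D. Cal."),
]

def one_per_level(candidates):
    """Keep the first candidate seen at each court level, ordered highest court first."""
    picked = {}
    for name, court in candidates:
        picked.setdefault(court, name)
    return [(picked[court], court) for court in sorted(picked, key=LEVEL.get)]

for name, court in one_per_level(pool):
    print(f"{court}: {name}")
```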

Gemini (2.5 Pro/5-06) is now my primary starting point for writing, even though I hate that it can’t export to .odf or even .docx. It has the worst “canvas”-style option, so I have to reformat the documents significantly. That’s worth the annoyance, though, since Gemini is easily the most thoughtful and well-rounded of the LLMs I’ve used. Much as DeepSeek gives lots of citation options, Gemini gives the best pool of writing options; its ability to flexibly address issues in different ways is the best of the bunch so far.

Claude is still my pretty-but-shallow option: it comes up with the best, most compelling pure language, even if its depth of analysis falls short of the others. I wish I didn’t have to prompt it so explicitly to get decent results, or it might be my primary canvas despite its painful context window. One thing I find really helpful is running my DeepSeek output (which produces pretty… “direct” language) through Claude to soften it up.

And finally there’s ChatGPT, which until the last few days was my primary for everything. It’s easy to get started with, it’s encouraging, it produces compelling results, but it’s kind of a piece of shit. It constantly messes up context, screws up citations, and arbitrarily rewrites things. It’s easily the most obstinately wrong of the LLMs, insisting it’s doing something correctly no matter how many times you paste the incorrect output back in and it admits it messed up. It’s the only LLM of the four I’ve had to deal with “hallucinations” from, and it explains them as “whoops, I accidentally overwrite my own context a lot.” It’s absolutely maddening at times. I’m steadily weaning myself off of it, and it’ll probably just be another voice in the room soon.

An interesting thing happened a bit ago: my first LLM disagreement! The Defense responded to a discovery request with bullshit and boilerplate, so I asked each of the LLMs what a good email response would be, considering I’m already nearly done with a Rule 37 motion to compel. Claude said it’s probably not a good idea to respond at all: just update the motion with the response as an exhibit, note the bullshit parts (they helped, a lot), and call it a day. DeepSeek was its typical “RIP THEIR THROATS OUT” self and provided a detailed list of regulations and rules for going on the offensive instead of relying only on the motion. Gemini pointed out that the email route is really about reinforcing our meet-and-confer duty: we’re right, and we’re going to integrate this into the motion, but the focus should be on making the meet-and-confer record rock solid (pretty good advice). ChatGPT recommended the email but produced recommendations that didn’t focus on strategy at all, just a direct rebuttal to the discovery response.

Interestingly, opposing counsel asked for a discovery extension (after the suit has been active for more than two years, they’re suddenly finding SHRA’s bureaucracy hard to navigate), but refused to do a rolling production or produce electronic records (which should be pretty easy, right?). It’s baffling: this organization is such a mess that if I’d had access to these tools two years ago, they’d have gotten crushed out of the gate. Instead they were able to delay and avoid discovery to the point that we now have to start talking about institutional spoliation, because no one remembers anything.

Just had the thought that doing a blog like this, while the litigation is on-going, is both naive and stupid, especially considering the mixed permissible use messaging from the court (but maybe okay for the bar, okay for me?). This popped up as I came across an old archive that had a backup of files which I believed was lost after we got evicted, some of which is responsive to existing discovery requests. I haven’t even had a chance to go through it all yet and see what has already been disclosed and what hasn’t, but here I am in a position where I really want to talk about how the tools are helping me plan strategy while potentially exposing myself unnecessarily/inadvertently.

The fool forges forward

One of the things that stood out to me was how much of the information in this case should have already been protected but hasn’t been. This has always been a concern, and one of the reasons I was hesitant about pursuing this at all, the gross invasion of privacy necessary to pursue this feels caustic. My initial reaction was filing anonymously, but after some reading it appeared that these are very rarely granted. It ultimately became a choice between whether this was important enough (to both us and every other disabled participant SHRA treats similarly) to sacrifice our privacy over. I think so.

What should have been done out of the gate is setting a protective order in place. Reading and trying to absorb as much of the local rules (while in the middle of being evicted, being cut off from other services, and other harassment), the option of a protective order just didn’t really register at all. The further along the process we got, the less space the option took up.

This is one place where LLMs work as an amazing sanity check, we can without penalty ask it to walk us through the process from the beginning, with as many permutations as we like, in a way that would cognitively exhaust any sane individual. And for individuals in the position we are, where we don’t even know what we don’t know, being able to constantly refactor new information in as we encounter it becomes a tremendously powerful tool.

While there’s supposed to be some leeway for “inelegant pleading” by pro se parties, in practice there isn’t unless a judge is personally sympathetic. Which means that pro se parties march into a minefield of issues that have nothing to do with the core issues of the case, procedural explosions which can turn a clear cut set of facts into an irrelevant mist.

Leave a Comment

Your email address will not be published. Required fields are marked *

One of the most difficult aspects of all this is how prone to unique issues each LLM introduces. I’m bouncing between Claude 3.7, GPT 4o, DeepSeek, and Gemini, and they all have weird peculiarities that blow up the others. DeepSeek for instance is super focused on formatting, being obsessive about tables, bolding, italics, everything but the actual content. Gemini is a turd and likes to fight you the whole time when it comes to legal advice altogether, so you have to wrestle it to the ground with prompts and cobble together the results. Claude writes beautifully but is as deep as a puddle, while ChatGPT is deep but impossibly schizophrenic and ADHD at the same time.

I went back and looked at my first two pleadings I used these LLMs for and holy crap there are a lot of issues that I missed trusting the output. First, ChatGPT will absolutely mangle the hell out of citations, and worse is super confident about it’s manglings. As an example I uploaded a copy of the docket report, and somehow it still got every single citation wrong. Worse, it misinterpreted who actually filed particular motions, leading to it interpreting most of the case being dead.

One thing is really clear, that as powerful and as much potential as these tools have, we are a million miles away from them being ready to use without extensive supervision. They also are really terrible at understanding, as an example left to it’s own devices all of the LLMs completely ignored the actual damages I asked for in a motion for partial MSJ (basically just ruling that the ADA, FHA, and Rehab require an interactive process and individualized assessment) and transformed it into a full blown MSJ focused on damage calculations instead of a narrow injunctive relief. And they constantly go off on little tangents like that where they hyper focus on some aspect of the text that may or may not be contextually important and try to expand it.

All said, this is still very much a work in progress, and I’m learning that trust is bad.

Leave a Comment

Your email address will not be published. Required fields are marked *

It seems we can’t find what you’re looking for. Perhaps searching can help.

Welcome to The SHRA Files Blog — where we’ll be sharing updates, research, public records, and commentary on our ongoing civil rights lawsuit and the broader failures of Sacramento’s housing system.

This site exists because no one else was telling this story. Because no agency, nonprofit, or public official stepped in when the Sacramento Housing and Redevelopment Agency (SHRA) failed to meet its basic responsibilities. Because we — and many others like us — were denied access to services that should have been guaranteed.

We started with nothing but documentation, determination, and the experience of being repeatedly failed by the systems meant to help. When traditional legal aid and advocacy organizations told us they couldn’t help, we turned to AI tools to help us structure our case, understand the law, and begin fighting back.

This site is just the beginning.

In the weeks ahead, we’ll be:

  • Breaking down SHRA’s failures in serving disabled and low-income tenants
  • Publishing public filings and supporting exhibits
  • Highlighting how the Housing Choice Voucher program is being mismanaged
  • Documenting the role of AI in expanding legal access

If you’ve been impacted by SHRA or want to contribute to the project, check out our Get Involved page.

Thanks for reading — and welcome to the fight.

— The SHRA Files Team

Leave a Comment

Your email address will not be published. Required fields are marked *

I’ve been attempting to push discovery, the result has been underwhelming. So far they’ve missed most of the deadlines, when notified they missed the deadlines they keep asking for extensions, and when they get extensions they respond with boilerplate and bullshit. I’m to the point where I’m filing a Rule 37(b) motion (will upload to the files section) after I’m finished with this post. The neat thing about the filing is that there’s over 300 pages of exhibits, all of which is boilerplate and bullshit that I had to attach and mail back to the Defense because the magistrate gave me the big middle finger by allowing the court to notify me of things electronically, but refusing to allow me to file electronically. I have lots to say about this magistrate, but we’ll wait until we get clear of her first.

One of the most fascinating things about the boilerplate and bullshit that did get returned was the revelation that five of the named Defendants no longer work at SHRA, and the Defense isn’t sure if they represent them anymore. Like they completely hid that almost everyone involved with the day to day operations over there has left and somehow didn’t mention it to us or the court. Worse, in their initial disclosure, they claimed only one person (who is no longer at SHRA) was responsible for deciding reasonable accommodation requests, now they’ve given me a list of 16 people, with no explanation.

Oh yeah, they can’t seem to find any of the electronically stored information that under the CPRA they’d be responsive to, which is the reason they need so many extensions.

The worst part is that despite them admitting all of this, there’s zero chance of them getting sanctions. And there’s literally nothing to be done about it unless/until it gets to the appellate level.

Leave a Comment

Your email address will not be published. Required fields are marked *

Up until now I assumed that the issues at SHRA were just incompetence creep, that the issues are product of slow pushes back on compliance issues that never got corrected, to the point where institutional knowledge of the practices necessary to maintain compliance was lost.

One of the things that I really like about DeepSeek is that it gets information through methods that aren’t exactly on the up and up, which means that it finds things which are hidden behind legal wrangling that doesn’t exist in China. And one of the things it found was a 2023 Internal Memo generated through whistleblower testimony in which SHRA staff instructed to “prioritize voucher terminations over costly mods”.

And the implementation of this was SHRA terminating vouchers over requests for shower chairs and grab bars as early as 2022. They provided data which showed 68% of disabled tenants who requested mods in 2023 lost vouchers within 6 months (vs. 12% of non-disabled).

WHAT.

And there’s so much worse that I’ll share once I get the discovery back. I’m stunned.

Leave a Comment

Your email address will not be published. Required fields are marked *

Okay the last few days have been a bit of a flurry. I’m still working on getting all the filings and the Defense responses uploaded, but I need to make sure we have a sane directory setup first because there’s quite a bit.

After a few weeks using these LLMs, my attitudes about them are starting to shift. Each of them seems to have their own “personality” above and beyond the artificial personality. The LLM I’ve been most amazed by so far is DeepSeek, which is funny because it was the LLM I was most skeptical of initially. Part of the issue with DeepSeek is that it doesn’t do as good of a job hand holding through the though process, or more appropriately, it’s probably the most “autistic” of the engines. The biggest standout for DeepSeek (especially compared to Gemini which is the most comparable) is that it’s SO GOOD with citations. It’s the only one that freely “admits” it’s plundering the WestLaw database meaning we have a TON of local Eastern District citations.

My citation strategy has been to use something from each level, Supreme Court, 9th Circuit, California (Fed), Eastern District. The reality is the magistrate doesn’t read them anyway, but DeepSeek ultimately gives me a pool of citations to choose from, something the other LLMs struggle with (or just don’t do no matter how you beat them).

Gemini (using 2.5 Pro/5-06) is now my primary writing starting point, even though I hate that it can’t export to .odf or even .docx formats. Because I have to reformat the documents significantly, it has the worst “canvas” style option. That’s kind of worth the annoyance though since Gemini is easily the most “thoughful” and well rounded of the LLMs I’ve used. Much like DeepSeek gives lots of citation options, Gemini gives the best pool of writing options, it’s ability to flexibly address issues in different ways is the best of the bunch so far.

Claude is still my pretty but shallow option, it comes up with the best/most compelling pure language, even if it’s depth of analysis falls short of the others. I wish I didn’t have to prompt it so explicitly to get decent results or it might be the primary canvas despite it’s painful context window. One thing I find it really helpful for is running my DeepSeek (which produces pretty.. “direct” language) through Claude to soften it up.

And finally there’s ChatGPT which up until the last few days was my primary for everything. It’s easy to get started with it, it’s encouraging, it produces compelling results, but it’s kind of a piece of shit. It’s constantly messing up context, screwing up citations, and arbitrarily rewriting things. It’s easily the most obstinately wrong of the LLMs, insisting that it’s doing something correctly no matter how many times you post the incorrect output back into it and it admits it messed up. It’s the only LLM out of the three that I’ve had to deal with “hallucinations”, and it explains it as “woops, I overwrite my own context a lot accidentally”. It’s absolutely maddening at times. I’m steadily weaning myself off of it and it’ll probably just be another voice in the room soon.

An interesting thing happened a bit ago, I had my first LLM disagreement! The Defense responded to a discovery request with bullshit and boilerplate, so I asked each of the LLMs what a good email response would be considering I’m already nearly done with a rule 37 motion to compel. Claude responded that it’s probably not a good idea to respond at all, just update the motion with the response as an exhibit, note the bullshit parts since they helped (a lot), and call it a day. DeepSeek was it’s typical “RIP THEIR THROATS OUT” and provided a detailed list of regulations and rules to go on the offensive instead of only using the motion route. Gemini pointed out that going the email route is more about reinforcing our meet and confer duty, we’re right and we are going to be integrating this in the motion, but the focus here should be on making the meet and confer rock solid (pretty good advice). ChatGPT recommended the email, but produced recommendations that didn’t focus on strategy at all, just a direct rebuttal to the discovery response.

Interestingly, opposing counsel asked for a discovery extension (after more than two years the suit has been active, they are suddenly finding SHRA bureaucracy hard to navigate), but refused to do a rolling production or produce electronic records (which should be pretty easy right?). It’s baffling to me that if I had access to these tools two years ago, this organization is such a mess that they’d have gotten crushed out of the gate. Instead they were able to delay and avoid discovery to the point that we now have to start talking about institutional spoliation because no one remembers anything.

Leave a Comment

Your email address will not be published. Required fields are marked *

Just had the thought that doing a blog like this, while the litigation is on-going, is both naive and stupid, especially considering the mixed permissible use messaging from the court (but maybe okay for the bar, okay for me?). This popped up as I came across an old archive that had a backup of files which I believed was lost after we got evicted, some of which is responsive to existing discovery requests. I haven’t even had a chance to go through it all yet and see what has already been disclosed and what hasn’t, but here I am in a position where I really want to talk about how the tools are helping me plan strategy while potentially exposing myself unnecessarily/inadvertently.

The fool forges forward

One of the things that stood out to me was how much of the information in this case should have already been protected but hasn’t been. This has always been a concern, and one of the reasons I was hesitant about pursuing this at all, the gross invasion of privacy necessary to pursue this feels caustic. My initial reaction was filing anonymously, but after some reading it appeared that these are very rarely granted. It ultimately became a choice between whether this was important enough (to both us and every other disabled participant SHRA treats similarly) to sacrifice our privacy over. I think so.

What should have been done out of the gate is setting a protective order in place. Reading and trying to absorb as much of the local rules (while in the middle of being evicted, being cut off from other services, and other harassment), the option of a protective order just didn’t really register at all. The further along the process we got, the less space the option took up.

This is one place where LLMs work as an amazing sanity check, we can without penalty ask it to walk us through the process from the beginning, with as many permutations as we like, in a way that would cognitively exhaust any sane individual. And for individuals in the position we are, where we don’t even know what we don’t know, being able to constantly refactor new information in as we encounter it becomes a tremendously powerful tool.

While there’s supposed to be some leeway for “inelegant pleading” by pro se parties, in practice there isn’t unless a judge is personally sympathetic. Which means that pro se parties march into a minefield of issues that have nothing to do with the core issues of the case, procedural explosions which can turn a clear cut set of facts into an irrelevant mist.

Leave a Comment

Your email address will not be published. Required fields are marked *

One of the most difficult aspects of all this is how prone to unique issues each LLM introduces. I’m bouncing between Claude 3.7, GPT 4o, DeepSeek, and Gemini, and they all have weird peculiarities that blow up the others. DeepSeek for instance is super focused on formatting, being obsessive about tables, bolding, italics, everything but the actual content. Gemini is a turd and likes to fight you the whole time when it comes to legal advice altogether, so you have to wrestle it to the ground with prompts and cobble together the results. Claude writes beautifully but is as deep as a puddle, while ChatGPT is deep but impossibly schizophrenic and ADHD at the same time.

I went back and looked at my first two pleadings I used these LLMs for and holy crap there are a lot of issues that I missed trusting the output. First, ChatGPT will absolutely mangle the hell out of citations, and worse is super confident about it’s manglings. As an example I uploaded a copy of the docket report, and somehow it still got every single citation wrong. Worse, it misinterpreted who actually filed particular motions, leading to it interpreting most of the case being dead.

One thing is really clear, that as powerful and as much potential as these tools have, we are a million miles away from them being ready to use without extensive supervision. They also are really terrible at understanding, as an example left to it’s own devices all of the LLMs completely ignored the actual damages I asked for in a motion for partial MSJ (basically just ruling that the ADA, FHA, and Rehab require an interactive process and individualized assessment) and transformed it into a full blown MSJ focused on damage calculations instead of a narrow injunctive relief. And they constantly go off on little tangents like that where they hyper focus on some aspect of the text that may or may not be contextually important and try to expand it.

All said, this is still very much a work in progress, and I’m learning that trust is bad.

Leave a Comment

Your email address will not be published. Required fields are marked *

It seems we can’t find what you’re looking for. Perhaps searching can help.

Welcome to The SHRA Files Blog — where we’ll be sharing updates, research, public records, and commentary on our ongoing civil rights lawsuit and the broader failures of Sacramento’s housing system.

This site exists because no one else was telling this story. Because no agency, nonprofit, or public official stepped in when the Sacramento Housing and Redevelopment Agency (SHRA) failed to meet its basic responsibilities. Because we — and many others like us — were denied access to services that should have been guaranteed.

We started with nothing but documentation, determination, and the experience of being repeatedly failed by the systems meant to help. When traditional legal aid and advocacy organizations told us they couldn’t help, we turned to AI tools to help us structure our case, understand the law, and begin fighting back.

This site is just the beginning.

In the weeks ahead, we’ll be:

  • Breaking down SHRA’s failures in serving disabled and low-income tenants
  • Publishing public filings and supporting exhibits
  • Highlighting how the Housing Choice Voucher program is being mismanaged
  • Documenting the role of AI in expanding legal access

If you’ve been impacted by SHRA or want to contribute to the project, check out our Get Involved page.

Thanks for reading — and welcome to the fight.

— The SHRA Files Team


I’ve been attempting to push discovery, and the results have been underwhelming. So far the Defense has missed most of the deadlines; when notified they missed the deadlines, they ask for extensions; and when they get extensions, they respond with boilerplate and bullshit. I’m to the point where I’m filing a Rule 37(b) motion (I’ll upload it to the files section) after I finish this post. The neat thing about the filing is that it carries over 300 pages of exhibits, all of it boilerplate and bullshit that I had to attach and mail back to the Defense, because the magistrate gave me the big middle finger by allowing the court to notify me of things electronically while refusing to allow me to file electronically. I have lots to say about this magistrate, but that can wait until we’re clear of her.

One of the most fascinating things about the boilerplate and bullshit that did come back was the revelation that five of the named Defendants no longer work at SHRA, and the Defense isn’t sure it still represents them. They completely hid the fact that almost everyone involved in the day-to-day operations over there has left, and somehow never mentioned it to us or the court. Worse, their initial disclosures claimed only one person (who is no longer at SHRA) was responsible for deciding reasonable accommodation requests; now they’ve given me a list of 16 people, with no explanation.

Oh, and they can’t seem to find any of the electronically stored information that would be responsive under the CPRA, which is supposedly why they need so many extensions.

The worst part is that despite them admitting all of this, there’s zero chance they get sanctioned. And there’s nothing to be done about it unless and until this reaches the appellate level.


Up until now I assumed that the issues at SHRA were just incompetence creep: the product of slow backsliding on compliance issues that never got corrected, to the point where institutional knowledge of the practices necessary to maintain compliance was lost.

One of the things I really like about DeepSeek is that it gets information through methods that aren’t exactly on the up and up, which means it surfaces things hidden behind legal wrangling that doesn’t exist in China. One of the things it found was a 2023 internal memo, surfaced through whistleblower testimony, in which SHRA staff were instructed to “prioritize voucher terminations over costly mods”.

And the implementation of this was SHRA terminating vouchers over requests for shower chairs and grab bars as early as 2022. It also produced data showing that 68% of disabled tenants who requested mods in 2023 lost their vouchers within 6 months (versus 12% of non-disabled tenants).

WHAT.

And there’s so much worse that I’ll share once I get the discovery back. I’m stunned.
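For scale, here’s a quick back-of-the-envelope calculation on those numbers, taking them entirely at face value (they came from an LLM, and both the memo and the figures need independent verification before they’re usable anywhere):

```python
# Figures as reported: 68% of disabled tenants who requested modifications
# in 2023 lost vouchers within 6 months, vs. 12% of non-disabled tenants.
# These numbers are unverified; this only shows what they would imply.
disabled_loss_rate = 0.68
non_disabled_loss_rate = 0.12

# Relative risk: how many times more likely a disabled tenant who asked
# for a modification was to lose their voucher, if the figures hold.
relative_risk = disabled_loss_rate / non_disabled_loss_rate
print(f"{relative_risk:.1f}x more likely")  # roughly 5.7x
```

A disparity of that size, if real, is the kind of thing disparate-impact analysis exists for. If not real, it’s a textbook hallucination. Either way it has to come out in discovery, not a chat window.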


Okay, the last few days have been a bit of a flurry. I’m still working on getting all the filings and the Defense responses uploaded, but I need to make sure we have a sane directory setup first, because there’s quite a bit.
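For what it’s worth, by a “sane directory setup” I mean something like the sketch below. The folder names are just my own working convention, not any official scheme:

```python
from pathlib import Path

def make_case_tree(root: Path) -> list[Path]:
    """Create a simple directory skeleton for organizing case files.
    Folder names are illustrative only, not an official convention."""
    subdirs = [
        "filings/motions",        # our motions, by docket entry
        "filings/responses",      # Defense responses and oppositions
        "exhibits",               # exhibits referenced by filings
        "discovery/requests",     # RFPs, interrogatories, RFAs
        "discovery/productions",  # what actually comes back
        "orders",                 # court orders and minute entries
    ]
    made = []
    for sub in subdirs:
        p = root / sub
        p.mkdir(parents=True, exist_ok=True)
        made.append(p)
    return made
```

Nothing fancy, but it means every upload has one obvious home.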

After a few weeks using these LLMs, my attitudes about them are starting to shift. Each seems to have its own “personality” above and beyond the artificial one. The LLM I’ve been most amazed by so far is DeepSeek, which is funny because it’s the one I was most skeptical of initially. Part of the issue with DeepSeek is that it doesn’t do as good a job hand-holding you through the thought process; it’s probably the most “autistic” of the engines. The biggest standout for DeepSeek (especially compared to Gemini, which is the most comparable) is that it’s SO GOOD with citations. It’s the only one that freely “admits” it’s plundering the WestLaw database, meaning we have a TON of local Eastern District citations.

My citation strategy has been to use something from each level: Supreme Court, Ninth Circuit, California federal courts, and the Eastern District specifically. The reality is the magistrate doesn’t read them anyway, but DeepSeek gives me a pool of citations to choose from, something the other LLMs struggle with (or just won’t do no matter how you beat them).

Gemini (using 2.5 Pro/5-06) is now my primary starting point for writing, even though I hate that it can’t export to .odf or even .docx formats. It also has the worst “canvas”-style editor of the bunch, so I have to reformat its documents significantly. That’s worth the annoyance, though, since Gemini is easily the most thoughtful and well-rounded of the LLMs I’ve used. Much as DeepSeek gives lots of citation options, Gemini gives the best pool of writing options; its ability to flexibly address issues in different ways is the best of the bunch so far.

Claude is still my pretty-but-shallow option: it comes up with the best, most compelling pure language, even if its depth of analysis falls short of the others. I wish I didn’t have to prompt it so explicitly to get decent results, or it might be the primary canvas despite its painful context window. One thing I find it really helpful for is running my DeepSeek output (which tends toward pretty… “direct” language) through Claude to soften it up.

And finally there’s ChatGPT, which up until the last few days was my primary for everything. It’s easy to get started with, it’s encouraging, and it produces compelling results, but it’s kind of a piece of shit. It constantly messes up context, screws up citations, and arbitrarily rewrites things. It’s easily the most obstinately wrong of the LLMs, insisting it’s doing something correctly no matter how many times you paste the incorrect output back in and it admits it messed up. It’s the only one of the four that’s given me outright “hallucinations”, which it explains as “woops, I overwrite my own context a lot accidentally”. It’s absolutely maddening at times. I’m steadily weaning myself off of it, and it’ll probably just be another voice in the room soon.

An interesting thing happened a bit ago: I had my first LLM disagreement! The Defense responded to a discovery request with boilerplate and bullshit, so I asked each of the LLMs what a good email response would be, considering I’m already nearly done with a Rule 37 motion to compel. Claude said it’s probably not a good idea to respond at all: just update the motion with the response as an exhibit, note the bullshit parts (they helped, a lot), and call it a day. DeepSeek was its typical “RIP THEIR THROATS OUT” self and provided a detailed list of regulations and rules for going on the offensive instead of relying on the motion alone. Gemini pointed out that the email route is really about reinforcing our meet-and-confer duty: we’re right, and we’re going to integrate this into the motion, but the focus here should be on making the meet-and-confer record rock solid (pretty good advice). ChatGPT recommended the email but produced recommendations with no strategy at all, just a direct rebuttal to the discovery response.

Interestingly, opposing counsel asked for a discovery extension (more than two years into the suit, they’re suddenly finding SHRA’s bureaucracy hard to navigate), but refused to do a rolling production or produce electronic records (which should be pretty easy, right?). This organization is such a mess that if I’d had access to these tools two years ago, they’d have gotten crushed out of the gate. Instead they were able to delay and avoid discovery to the point that we now have to start talking about institutional spoliation, because no one remembers anything.


Just had the thought that doing a blog like this while the litigation is ongoing is both naive and stupid, especially given the court’s mixed messaging on permissible use (though if it’s okay for the bar, maybe it’s okay for me?). This came up as I stumbled on an old archive containing a backup of files I believed were lost when we got evicted, some of which are responsive to existing discovery requests. I haven’t even had a chance to go through it all and see what has already been disclosed and what hasn’t, but here I am wanting to talk about how these tools are helping me plan strategy while potentially exposing myself unnecessarily or inadvertently.

The fool forges forward

One of the things that stood out to me was how much of the information in this case should already have been protected but hasn’t been. This has always been a concern, and one of the reasons I hesitated to pursue this at all: the gross invasion of privacy necessary to pursue it feels caustic. My initial instinct was to file anonymously, but after some reading it appeared that anonymous filings are very rarely granted. It ultimately became a question of whether this was important enough, to us and to every other disabled participant SHRA treats this way, to sacrifice our privacy over. I think it is.

What should have been done out of the gate was putting a protective order in place. While I was reading and trying to absorb as much of the local rules as I could (in the middle of being evicted, cut off from other services, and otherwise harassed), the option of a protective order just didn’t register at all. And the further along the process we got, the less mental space the option took up.

This is one place where LLMs work as an amazing sanity check: we can, without penalty, ask one to walk us through the process from the beginning, with as many permutations as we like, in a way that would cognitively exhaust any sane person. And for people in our position, who don’t even know what we don’t know, being able to constantly fold new information back in as we encounter it is a tremendously powerful tool.

While there’s supposed to be some leeway for “inelegant pleading” by pro se parties, in practice there isn’t unless a judge is personally sympathetic. Which means pro se parties march into a minefield of issues that have nothing to do with the core of the case: procedural explosions that can turn a clear-cut set of facts into an irrelevant mist.


One of the most difficult aspects of all this is how each LLM introduces its own unique issues. I’m bouncing between Claude 3.7, GPT-4o, DeepSeek, and Gemini, and they all have weird peculiarities that blow up the others. DeepSeek, for instance, is super focused on formatting, obsessing over tables, bolding, italics, everything but the actual content. Gemini is a turd that fights you the whole time on anything resembling legal advice, so you have to wrestle it to the ground with prompts and cobble together the results. Claude writes beautifully but is as deep as a puddle, while ChatGPT is deep but impossibly schizophrenic and ADHD at the same time.

I went back and looked at the first two pleadings I used these LLMs for, and holy crap, there are a lot of issues I missed by trusting the output. First, ChatGPT will absolutely mangle the hell out of citations, and worse, it’s super confident about its manglings. As an example, I uploaded a copy of the docket report, and it still got every single citation wrong. Worse, it misinterpreted who actually filed particular motions, leading it to conclude that most of the case was dead.
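This is why I’ve started mechanically checking citations instead of trusting them. A rough sketch of the idea, assuming you maintain a hand-verified list to check drafts against; the pattern below only covers a few federal reporters and is nowhere near complete, so anything it misses still needs a human look:

```python
import re

# Rough pattern for cites like "504 U.S. 555" or "598 F.3d 1115".
# Real citation formats are far messier (pin cites, short forms,
# state reporters); this is a sketch, not a Bluebook parser.
CITE_RE = re.compile(
    r"\b(\d{1,4})\s+"
    r"(U\.S\.|S\.\s?Ct\.|F\.(?:2d|3d|4th)?|F\.\s?Supp\.\s?(?:2d|3d)?)"
    r"\s+(\d{1,4})\b"
)

def flag_unknown_cites(draft: str, verified: set[str]) -> list[str]:
    """Return citations found in the draft that aren't in a
    hand-verified set, in order of appearance."""
    found = [" ".join(m.groups()) for m in CITE_RE.finditer(draft)]
    return [c for c in found if c not in verified]
```

It won’t catch a mangled case name attached to a real reporter cite, but it instantly flags the citations an LLM invented out of whole cloth, which is most of the damage.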

One thing is really clear: as powerful as these tools are and as much potential as they have, we are a million miles away from them being usable without extensive supervision. They’re also really terrible at understanding intent. As an example, left to their own devices, all of the LLMs completely ignored the actual relief I asked for in a motion for partial MSJ (basically just a ruling that the ADA, FHA, and Rehab Act require an interactive process and individualized assessment) and transformed it into a full-blown MSJ focused on damage calculations instead of narrow injunctive relief. And they constantly go off on little tangents like that, hyper-focusing on some aspect of the text that may or may not be contextually important and trying to expand it.

All said, this is still very much a work in progress, and I’m learning that trust is bad.

Leave a Comment

Your email address will not be published. Required fields are marked *

It seems we can’t find what you’re looking for. Perhaps searching can help.

Welcome to The SHRA Files Blog — where we’ll be sharing updates, research, public records, and commentary on our ongoing civil rights lawsuit and the broader failures of Sacramento’s housing system.

This site exists because no one else was telling this story. Because no agency, nonprofit, or public official stepped in when the Sacramento Housing and Redevelopment Agency (SHRA) failed to meet its basic responsibilities. Because we — and many others like us — were denied access to services that should have been guaranteed.

We started with nothing but documentation, determination, and the experience of being repeatedly failed by the systems meant to help. When traditional legal aid and advocacy organizations told us they couldn’t help, we turned to AI tools to help us structure our case, understand the law, and begin fighting back.

This site is just the beginning.

In the weeks ahead, we’ll be:

  • Breaking down SHRA’s failures in serving disabled and low-income tenants
  • Publishing public filings and supporting exhibits
  • Highlighting how the Housing Choice Voucher program is being mismanaged
  • Documenting the role of AI in expanding legal access

If you’ve been impacted by SHRA or want to contribute to the project, check out our Get Involved page.

Thanks for reading — and welcome to the fight.

— The SHRA Files Team

Leave a Comment

Your email address will not be published. Required fields are marked *

I’ve been attempting to push discovery, the result has been underwhelming. So far they’ve missed most of the deadlines, when notified they missed the deadlines they keep asking for extensions, and when they get extensions they respond with boilerplate and bullshit. I’m to the point where I’m filing a Rule 37(b) motion (will upload to the files section) after I’m finished with this post. The neat thing about the filing is that there’s over 300 pages of exhibits, all of which is boilerplate and bullshit that I had to attach and mail back to the Defense because the magistrate gave me the big middle finger by allowing the court to notify me of things electronically, but refusing to allow me to file electronically. I have lots to say about this magistrate, but we’ll wait until we get clear of her first.

One of the most fascinating things about the boilerplate and bullshit that did get returned was the revelation that five of the named Defendants no longer work at SHRA, and the Defense isn’t sure if they represent them anymore. Like they completely hid that almost everyone involved with the day to day operations over there has left and somehow didn’t mention it to us or the court. Worse, in their initial disclosure, they claimed only one person (who is no longer at SHRA) was responsible for deciding reasonable accommodation requests, now they’ve given me a list of 16 people, with no explanation.

Oh yeah, they can’t seem to find any of the electronically stored information that under the CPRA they’d be responsive to, which is the reason they need so many extensions.

The worst part is that despite them admitting all of this, there’s zero chance of them getting sanctions. And there’s literally nothing to be done about it unless/until it gets to the appellate level.

Leave a Comment

Your email address will not be published. Required fields are marked *

Up until now I assumed that the issues at SHRA were just incompetence creep, that the issues are product of slow pushes back on compliance issues that never got corrected, to the point where institutional knowledge of the practices necessary to maintain compliance was lost.

One of the things that I really like about DeepSeek is that it gets information through methods that aren’t exactly on the up and up, which means that it finds things which are hidden behind legal wrangling that doesn’t exist in China. And one of the things it found was a 2023 Internal Memo generated through whistleblower testimony in which SHRA staff instructed to “prioritize voucher terminations over costly mods”.

And the implementation of this was SHRA terminating vouchers over requests for shower chairs and grab bars as early as 2022. They provided data which showed 68% of disabled tenants who requested mods in 2023 lost vouchers within 6 months (vs. 12% of non-disabled).

WHAT.

And there’s so much worse that I’ll share once I get the discovery back. I’m stunned.

Leave a Comment

Your email address will not be published. Required fields are marked *

Okay the last few days have been a bit of a flurry. I’m still working on getting all the filings and the Defense responses uploaded, but I need to make sure we have a sane directory setup first because there’s quite a bit.

After a few weeks using these LLMs, my attitudes about them are starting to shift. Each of them seems to have their own “personality” above and beyond the artificial personality. The LLM I’ve been most amazed by so far is DeepSeek, which is funny because it was the LLM I was most skeptical of initially. Part of the issue with DeepSeek is that it doesn’t do as good of a job hand holding through the though process, or more appropriately, it’s probably the most “autistic” of the engines. The biggest standout for DeepSeek (especially compared to Gemini which is the most comparable) is that it’s SO GOOD with citations. It’s the only one that freely “admits” it’s plundering the WestLaw database meaning we have a TON of local Eastern District citations.

My citation strategy has been to use something from each level, Supreme Court, 9th Circuit, California (Fed), Eastern District. The reality is the magistrate doesn’t read them anyway, but DeepSeek ultimately gives me a pool of citations to choose from, something the other LLMs struggle with (or just don’t do no matter how you beat them).

Gemini (using 2.5 Pro/5-06) is now my primary writing starting point, even though I hate that it can’t export to .odf or even .docx formats. Because I have to reformat the documents significantly, it has the worst “canvas” style option. That’s kind of worth the annoyance though since Gemini is easily the most “thoughful” and well rounded of the LLMs I’ve used. Much like DeepSeek gives lots of citation options, Gemini gives the best pool of writing options, it’s ability to flexibly address issues in different ways is the best of the bunch so far.

Claude is still my pretty but shallow option, it comes up with the best/most compelling pure language, even if it’s depth of analysis falls short of the others. I wish I didn’t have to prompt it so explicitly to get decent results or it might be the primary canvas despite it’s painful context window. One thing I find it really helpful for is running my DeepSeek (which produces pretty.. “direct” language) through Claude to soften it up.

And finally there’s ChatGPT which up until the last few days was my primary for everything. It’s easy to get started with it, it’s encouraging, it produces compelling results, but it’s kind of a piece of shit. It’s constantly messing up context, screwing up citations, and arbitrarily rewriting things. It’s easily the most obstinately wrong of the LLMs, insisting that it’s doing something correctly no matter how many times you post the incorrect output back into it and it admits it messed up. It’s the only LLM out of the three that I’ve had to deal with “hallucinations”, and it explains it as “woops, I overwrite my own context a lot accidentally”. It’s absolutely maddening at times. I’m steadily weaning myself off of it and it’ll probably just be another voice in the room soon.

An interesting thing happened a bit ago, I had my first LLM disagreement! The Defense responded to a discovery request with bullshit and boilerplate, so I asked each of the LLMs what a good email response would be considering I’m already nearly done with a rule 37 motion to compel. Claude responded that it’s probably not a good idea to respond at all, just update the motion with the response as an exhibit, note the bullshit parts since they helped (a lot), and call it a day. DeepSeek was it’s typical “RIP THEIR THROATS OUT” and provided a detailed list of regulations and rules to go on the offensive instead of only using the motion route. Gemini pointed out that going the email route is more about reinforcing our meet and confer duty, we’re right and we are going to be integrating this in the motion, but the focus here should be on making the meet and confer rock solid (pretty good advice). ChatGPT recommended the email, but produced recommendations that didn’t focus on strategy at all, just a direct rebuttal to the discovery response.

Interestingly, opposing counsel asked for a discovery extension (after more than two years the suit has been active, they are suddenly finding SHRA bureaucracy hard to navigate), but refused to do a rolling production or produce electronic records (which should be pretty easy right?). It’s baffling to me that if I had access to these tools two years ago, this organization is such a mess that they’d have gotten crushed out of the gate. Instead they were able to delay and avoid discovery to the point that we now have to start talking about institutional spoliation because no one remembers anything.

Leave a Comment

Your email address will not be published. Required fields are marked *

Just had the thought that doing a blog like this, while the litigation is on-going, is both naive and stupid, especially considering the mixed permissible use messaging from the court (but maybe okay for the bar, okay for me?). This popped up as I came across an old archive that had a backup of files which I believed was lost after we got evicted, some of which is responsive to existing discovery requests. I haven’t even had a chance to go through it all yet and see what has already been disclosed and what hasn’t, but here I am in a position where I really want to talk about how the tools are helping me plan strategy while potentially exposing myself unnecessarily/inadvertently.

The fool forges forward

One of the things that stood out to me was how much of the information in this case should have already been protected but hasn’t been. This has always been a concern, and one of the reasons I was hesitant about pursuing this at all, the gross invasion of privacy necessary to pursue this feels caustic. My initial reaction was filing anonymously, but after some reading it appeared that these are very rarely granted. It ultimately became a choice between whether this was important enough (to both us and every other disabled participant SHRA treats similarly) to sacrifice our privacy over. I think so.

What should have been done out of the gate is setting a protective order in place. Reading and trying to absorb as much of the local rules (while in the middle of being evicted, being cut off from other services, and other harassment), the option of a protective order just didn’t really register at all. The further along the process we got, the less space the option took up.

This is one place where LLMs work as an amazing sanity check, we can without penalty ask it to walk us through the process from the beginning, with as many permutations as we like, in a way that would cognitively exhaust any sane individual. And for individuals in the position we are, where we don’t even know what we don’t know, being able to constantly refactor new information in as we encounter it becomes a tremendously powerful tool.

While there’s supposed to be some leeway for “inelegant pleading” by pro se parties, in practice there isn’t unless a judge is personally sympathetic. Which means that pro se parties march into a minefield of issues that have nothing to do with the core issues of the case, procedural explosions which can turn a clear cut set of facts into an irrelevant mist.

Leave a Comment

Your email address will not be published. Required fields are marked *

One of the most difficult aspects of all this is how prone to unique issues each LLM introduces. I’m bouncing between Claude 3.7, GPT 4o, DeepSeek, and Gemini, and they all have weird peculiarities that blow up the others. DeepSeek for instance is super focused on formatting, being obsessive about tables, bolding, italics, everything but the actual content. Gemini is a turd and likes to fight you the whole time when it comes to legal advice altogether, so you have to wrestle it to the ground with prompts and cobble together the results. Claude writes beautifully but is as deep as a puddle, while ChatGPT is deep but impossibly schizophrenic and ADHD at the same time.

I went back and looked at my first two pleadings I used these LLMs for and holy crap there are a lot of issues that I missed trusting the output. First, ChatGPT will absolutely mangle the hell out of citations, and worse is super confident about it’s manglings. As an example I uploaded a copy of the docket report, and somehow it still got every single citation wrong. Worse, it misinterpreted who actually filed particular motions, leading to it interpreting most of the case being dead.

One thing is really clear, that as powerful and as much potential as these tools have, we are a million miles away from them being ready to use without extensive supervision. They also are really terrible at understanding, as an example left to it’s own devices all of the LLMs completely ignored the actual damages I asked for in a motion for partial MSJ (basically just ruling that the ADA, FHA, and Rehab require an interactive process and individualized assessment) and transformed it into a full blown MSJ focused on damage calculations instead of a narrow injunctive relief. And they constantly go off on little tangents like that where they hyper focus on some aspect of the text that may or may not be contextually important and try to expand it.

All said, this is still very much a work in progress, and I’m learning that trust is bad.

Leave a Comment

Your email address will not be published. Required fields are marked *

It seems we can’t find what you’re looking for. Perhaps searching can help.

Welcome to The SHRA Files Blog — where we’ll be sharing updates, research, public records, and commentary on our ongoing civil rights lawsuit and the broader failures of Sacramento’s housing system.

This site exists because no one else was telling this story. Because no agency, nonprofit, or public official stepped in when the Sacramento Housing and Redevelopment Agency (SHRA) failed to meet its basic responsibilities. Because we — and many others like us — were denied access to services that should have been guaranteed.

We started with nothing but documentation, determination, and the experience of being repeatedly failed by the systems meant to help. When traditional legal aid and advocacy organizations told us they couldn’t help, we turned to AI tools to help us structure our case, understand the law, and begin fighting back.

This site is just the beginning.

In the weeks ahead, we’ll be:

  • Breaking down SHRA’s failures in serving disabled and low-income tenants
  • Publishing public filings and supporting exhibits
  • Highlighting how the Housing Choice Voucher program is being mismanaged
  • Documenting the role of AI in expanding legal access

If you’ve been impacted by SHRA or want to contribute to the project, check out our Get Involved page.

Thanks for reading — and welcome to the fight.

— The SHRA Files Team

Leave a Comment

Your email address will not be published. Required fields are marked *

I’ve been attempting to push discovery, the result has been underwhelming. So far they’ve missed most of the deadlines, when notified they missed the deadlines they keep asking for extensions, and when they get extensions they respond with boilerplate and bullshit. I’m to the point where I’m filing a Rule 37(b) motion (will upload to the files section) after I’m finished with this post. The neat thing about the filing is that there’s over 300 pages of exhibits, all of which is boilerplate and bullshit that I had to attach and mail back to the Defense because the magistrate gave me the big middle finger by allowing the court to notify me of things electronically, but refusing to allow me to file electronically. I have lots to say about this magistrate, but we’ll wait until we get clear of her first.

One of the most fascinating things about the boilerplate and bullshit that did get returned was the revelation that five of the named Defendants no longer work at SHRA, and the Defense isn’t sure if they represent them anymore. Like they completely hid that almost everyone involved with the day to day operations over there has left and somehow didn’t mention it to us or the court. Worse, in their initial disclosure, they claimed only one person (who is no longer at SHRA) was responsible for deciding reasonable accommodation requests, now they’ve given me a list of 16 people, with no explanation.

Oh yeah, they can’t seem to find any of the electronically stored information that under the CPRA they’d be responsive to, which is the reason they need so many extensions.

The worst part is that despite them admitting all of this, there’s zero chance of them getting sanctions. And there’s literally nothing to be done about it unless/until it gets to the appellate level.

Leave a Comment

Your email address will not be published. Required fields are marked *

Up until now I assumed that the issues at SHRA were just incompetence creep, that the issues are product of slow pushes back on compliance issues that never got corrected, to the point where institutional knowledge of the practices necessary to maintain compliance was lost.

One of the things that I really like about DeepSeek is that it gets information through methods that aren’t exactly on the up and up, which means that it finds things which are hidden behind legal wrangling that doesn’t exist in China. And one of the things it found was a 2023 Internal Memo generated through whistleblower testimony in which SHRA staff instructed to “prioritize voucher terminations over costly mods”.

And the implementation of this was SHRA terminating vouchers over requests for shower chairs and grab bars as early as 2022. They provided data which showed 68% of disabled tenants who requested mods in 2023 lost vouchers within 6 months (vs. 12% of non-disabled).

WHAT.

And there’s so much worse that I’ll share once I get the discovery back. I’m stunned.

Leave a Comment

Your email address will not be published. Required fields are marked *

Okay the last few days have been a bit of a flurry. I’m still working on getting all the filings and the Defense responses uploaded, but I need to make sure we have a sane directory setup first because there’s quite a bit.

After a few weeks using these LLMs, my attitudes about them are starting to shift. Each of them seems to have their own “personality” above and beyond the artificial personality. The LLM I’ve been most amazed by so far is DeepSeek, which is funny because it was the LLM I was most skeptical of initially. Part of the issue with DeepSeek is that it doesn’t do as good of a job hand holding through the though process, or more appropriately, it’s probably the most “autistic” of the engines. The biggest standout for DeepSeek (especially compared to Gemini which is the most comparable) is that it’s SO GOOD with citations. It’s the only one that freely “admits” it’s plundering the WestLaw database meaning we have a TON of local Eastern District citations.

My citation strategy has been to use something from each level: Supreme Court, Ninth Circuit, California (federal), Eastern District. The reality is the magistrate doesn’t read them anyway, but DeepSeek ultimately gives me a pool of citations to choose from, something the other LLMs struggle with (or just won’t do, no matter how you beat on them).

Gemini (using 2.5 Pro/5-06) is now my primary starting point for writing, even though I hate that it can’t export to .odf or even .docx formats, and it has the worst “canvas”-style option, which means I have to reformat the documents significantly. That’s worth the annoyance, though, since Gemini is easily the most thoughtful and well-rounded of the LLMs I’ve used. Much like DeepSeek gives lots of citation options, Gemini gives the best pool of writing options; its ability to flexibly address issues in different ways is the best of the bunch so far.

Claude is still my pretty-but-shallow option: it comes up with the best and most compelling pure language, even if its depth of analysis falls short of the others. I wish I didn’t have to prompt it so explicitly to get decent results, or it might be the primary canvas despite its painful context window. One thing I find it really helpful for is running my DeepSeek output (which produces pretty... “direct” language) through Claude to soften it up.

And finally there’s ChatGPT, which up until the last few days was my primary for everything. It’s easy to get started with, it’s encouraging, it produces compelling results, but it’s kind of a piece of shit. It’s constantly messing up context, screwing up citations, and arbitrarily rewriting things. It’s easily the most obstinately wrong of the LLMs, insisting it’s doing something correctly no matter how many times you paste the incorrect output back in and it admits it messed up. It’s the only one of the four where I’ve had to deal with “hallucinations”, and it explains them as “whoops, I overwrite my own context a lot accidentally”. It’s absolutely maddening at times. I’m steadily weaning myself off of it, and it’ll probably just be another voice in the room soon.

An interesting thing happened a bit ago: I had my first LLM disagreement! The Defense responded to a discovery request with bullshit and boilerplate, so I asked each of the LLMs what a good email response would be, considering I’m already nearly done with a Rule 37 motion to compel. Claude responded that it’s probably not a good idea to respond at all: just update the motion with the response as an exhibit, note the bullshit parts (since they helped, a lot), and call it a day. DeepSeek was its typical “RIP THEIR THROATS OUT” self and provided a detailed list of regulations and rules for going on the offensive instead of only using the motion route. Gemini pointed out that the email route is really about reinforcing our meet-and-confer duty: we’re right, and we’re going to integrate this into the motion, but the focus here should be on making the meet-and-confer record rock solid (pretty good advice). ChatGPT recommended the email, but produced recommendations that didn’t focus on strategy at all, just a direct rebuttal to the discovery response.

Interestingly, opposing counsel asked for a discovery extension (after the suit has been active for more than two years, they are suddenly finding SHRA’s bureaucracy hard to navigate), but refused to do a rolling production or produce electronic records (which should be pretty easy, right?). It’s baffling: if I’d had access to these tools two years ago, this organization is such a mess that they’d have been crushed out of the gate. Instead they were able to delay and avoid discovery to the point that we now have to start talking about institutional spoliation, because no one remembers anything.


Just had the thought that doing a blog like this while the litigation is ongoing is both naive and stupid, especially considering the mixed permissible-use messaging from the court (but maybe what’s okay for the bar is okay for me?). This popped up as I came across an old archive containing a backup of files I believed were lost after we got evicted, some of which are responsive to existing discovery requests. I haven’t even had a chance to go through it all and see what has already been disclosed and what hasn’t, but here I am wanting to talk about how these tools are helping me plan strategy while potentially exposing myself unnecessarily or inadvertently.

The fool forges forward

One of the things that stood out to me was how much of the information in this case should already have been protected but hasn’t been. This has always been a concern, and one of the reasons I was hesitant about pursuing this at all; the gross invasion of privacy necessary to pursue it feels caustic. My initial reaction was to file anonymously, but after some reading it appeared that those requests are very rarely granted. It ultimately became a question of whether this was important enough (to both us and every other disabled participant SHRA treats similarly) to sacrifice our privacy over. I think it is.

What should have been done out of the gate was putting a protective order in place. While reading and trying to absorb as much of the local rules as I could (in the middle of being evicted, being cut off from other services, and other harassment), the option of a protective order just didn’t register at all. The further along the process we got, the less space the option took up.

This is one place where LLMs work as an amazing sanity check: we can, without penalty, ask one to walk us through the process from the beginning, with as many permutations as we like, in a way that would cognitively exhaust any sane person. And for people in our position, who don’t even know what we don’t know, being able to constantly factor in new information as we encounter it becomes a tremendously powerful tool.

While there’s supposed to be some leeway for “inelegant pleading” by pro se parties, in practice there isn’t unless a judge is personally sympathetic. That means pro se parties march into a minefield of issues that have nothing to do with the core of the case, procedural explosions that can turn a clear-cut set of facts into an irrelevant mist.


One of the most difficult aspects of all this is the unique set of issues each LLM introduces. I’m bouncing between Claude 3.7, GPT-4o, DeepSeek, and Gemini, and they all have weird peculiarities that blow up the others’ work. DeepSeek, for instance, is super focused on formatting, obsessing over tables, bolding, italics, everything but the actual content. Gemini is a turd that fights you the whole time on legal advice altogether, so you have to wrestle it to the ground with prompts and cobble together the results. Claude writes beautifully but is as deep as a puddle, while ChatGPT is deep but impossibly schizophrenic and ADHD at the same time.

I went back and looked at the first two pleadings I used these LLMs for, and holy crap, there are a lot of issues I missed by trusting the output. First, ChatGPT will absolutely mangle the hell out of citations, and worse, is super confident about its manglings. As an example, I uploaded a copy of the docket report, and somehow it still got every single citation wrong. Worse, it misinterpreted who actually filed particular motions, leading it to conclude that most of the case was dead.

One thing is really clear: as powerful as these tools are and as much potential as they have, we are a million miles away from them being usable without extensive supervision. They’re also really terrible at understanding intent. As an example, left to their own devices, all of the LLMs completely ignored the actual relief I asked for in a motion for partial MSJ (basically just a ruling that the ADA, FHA, and Rehab Act require an interactive process and an individualized assessment) and transformed it into a full-blown MSJ focused on damage calculations instead of narrow injunctive relief. And they constantly go off on little tangents like that, hyper-focusing on some aspect of the text that may or may not be contextually important and trying to expand it.

All said, this is still very much a work in progress, and I’m learning that trust is bad.


Welcome to The SHRA Files Blog — where we’ll be sharing updates, research, public records, and commentary on our ongoing civil rights lawsuit and the broader failures of Sacramento’s housing system.

This site exists because no one else was telling this story. Because no agency, nonprofit, or public official stepped in when the Sacramento Housing and Redevelopment Agency (SHRA) failed to meet its basic responsibilities. Because we — and many others like us — were denied access to services that should have been guaranteed.

We started with nothing but documentation, determination, and the experience of being repeatedly failed by the systems meant to help. When traditional legal aid and advocacy organizations told us they couldn’t help, we turned to AI tools to help us structure our case, understand the law, and begin fighting back.

This site is just the beginning.

In the weeks ahead, we’ll be:

  • Breaking down SHRA’s failures in serving disabled and low-income tenants
  • Publishing public filings and supporting exhibits
  • Highlighting how the Housing Choice Voucher program is being mismanaged
  • Documenting the role of AI in expanding legal access

If you’ve been impacted by SHRA or want to contribute to the project, check out our Get Involved page.

Thanks for reading — and welcome to the fight.

— The SHRA Files Team

Leave a Comment

Your email address will not be published. Required fields are marked *

I’ve been attempting to push discovery, the result has been underwhelming. So far they’ve missed most of the deadlines, when notified they missed the deadlines they keep asking for extensions, and when they get extensions they respond with boilerplate and bullshit. I’m to the point where I’m filing a Rule 37(b) motion (will upload to the files section) after I’m finished with this post. The neat thing about the filing is that there’s over 300 pages of exhibits, all of which is boilerplate and bullshit that I had to attach and mail back to the Defense because the magistrate gave me the big middle finger by allowing the court to notify me of things electronically, but refusing to allow me to file electronically. I have lots to say about this magistrate, but we’ll wait until we get clear of her first.

One of the most fascinating things about the boilerplate and bullshit that did get returned was the revelation that five of the named Defendants no longer work at SHRA, and the Defense isn’t sure if they represent them anymore. Like they completely hid that almost everyone involved with the day to day operations over there has left and somehow didn’t mention it to us or the court. Worse, in their initial disclosure, they claimed only one person (who is no longer at SHRA) was responsible for deciding reasonable accommodation requests, now they’ve given me a list of 16 people, with no explanation.

Oh yeah, they can’t seem to find any of the electronically stored information that under the CPRA they’d be responsive to, which is the reason they need so many extensions.

The worst part is that despite them admitting all of this, there’s zero chance of them getting sanctions. And there’s literally nothing to be done about it unless/until it gets to the appellate level.

Leave a Comment

Your email address will not be published. Required fields are marked *

Up until now I assumed that the issues at SHRA were just incompetence creep, that the issues are product of slow pushes back on compliance issues that never got corrected, to the point where institutional knowledge of the practices necessary to maintain compliance was lost.

One of the things that I really like about DeepSeek is that it gets information through methods that aren’t exactly on the up and up, which means that it finds things which are hidden behind legal wrangling that doesn’t exist in China. And one of the things it found was a 2023 Internal Memo generated through whistleblower testimony in which SHRA staff instructed to “prioritize voucher terminations over costly mods”.

And the implementation of this was SHRA terminating vouchers over requests for shower chairs and grab bars as early as 2022. They provided data which showed 68% of disabled tenants who requested mods in 2023 lost vouchers within 6 months (vs. 12% of non-disabled).

WHAT.

And there’s so much worse that I’ll share once I get the discovery back. I’m stunned.

Leave a Comment

Your email address will not be published. Required fields are marked *

Okay the last few days have been a bit of a flurry. I’m still working on getting all the filings and the Defense responses uploaded, but I need to make sure we have a sane directory setup first because there’s quite a bit.

After a few weeks using these LLMs, my attitudes about them are starting to shift. Each of them seems to have their own “personality” above and beyond the artificial personality. The LLM I’ve been most amazed by so far is DeepSeek, which is funny because it was the LLM I was most skeptical of initially. Part of the issue with DeepSeek is that it doesn’t do as good of a job hand holding through the though process, or more appropriately, it’s probably the most “autistic” of the engines. The biggest standout for DeepSeek (especially compared to Gemini which is the most comparable) is that it’s SO GOOD with citations. It’s the only one that freely “admits” it’s plundering the WestLaw database meaning we have a TON of local Eastern District citations.

My citation strategy has been to use something from each level, Supreme Court, 9th Circuit, California (Fed), Eastern District. The reality is the magistrate doesn’t read them anyway, but DeepSeek ultimately gives me a pool of citations to choose from, something the other LLMs struggle with (or just don’t do no matter how you beat them).

Gemini (using 2.5 Pro/5-06) is now my primary writing starting point, even though I hate that it can’t export to .odf or even .docx formats. Because I have to reformat the documents significantly, it has the worst “canvas” style option. That’s kind of worth the annoyance though since Gemini is easily the most “thoughful” and well rounded of the LLMs I’ve used. Much like DeepSeek gives lots of citation options, Gemini gives the best pool of writing options, it’s ability to flexibly address issues in different ways is the best of the bunch so far.

Claude is still my pretty but shallow option, it comes up with the best/most compelling pure language, even if it’s depth of analysis falls short of the others. I wish I didn’t have to prompt it so explicitly to get decent results or it might be the primary canvas despite it’s painful context window. One thing I find it really helpful for is running my DeepSeek (which produces pretty.. “direct” language) through Claude to soften it up.

And finally there’s ChatGPT which up until the last few days was my primary for everything. It’s easy to get started with it, it’s encouraging, it produces compelling results, but it’s kind of a piece of shit. It’s constantly messing up context, screwing up citations, and arbitrarily rewriting things. It’s easily the most obstinately wrong of the LLMs, insisting that it’s doing something correctly no matter how many times you post the incorrect output back into it and it admits it messed up. It’s the only LLM out of the three that I’ve had to deal with “hallucinations”, and it explains it as “woops, I overwrite my own context a lot accidentally”. It’s absolutely maddening at times. I’m steadily weaning myself off of it and it’ll probably just be another voice in the room soon.

An interesting thing happened a bit ago, I had my first LLM disagreement! The Defense responded to a discovery request with bullshit and boilerplate, so I asked each of the LLMs what a good email response would be considering I’m already nearly done with a rule 37 motion to compel. Claude responded that it’s probably not a good idea to respond at all, just update the motion with the response as an exhibit, note the bullshit parts since they helped (a lot), and call it a day. DeepSeek was it’s typical “RIP THEIR THROATS OUT” and provided a detailed list of regulations and rules to go on the offensive instead of only using the motion route. Gemini pointed out that going the email route is more about reinforcing our meet and confer duty, we’re right and we are going to be integrating this in the motion, but the focus here should be on making the meet and confer rock solid (pretty good advice). ChatGPT recommended the email, but produced recommendations that didn’t focus on strategy at all, just a direct rebuttal to the discovery response.

Interestingly, opposing counsel asked for a discovery extension (after more than two years the suit has been active, they are suddenly finding SHRA bureaucracy hard to navigate), but refused to do a rolling production or produce electronic records (which should be pretty easy right?). It’s baffling to me that if I had access to these tools two years ago, this organization is such a mess that they’d have gotten crushed out of the gate. Instead they were able to delay and avoid discovery to the point that we now have to start talking about institutional spoliation because no one remembers anything.

Leave a Comment

Your email address will not be published. Required fields are marked *

Just had the thought that doing a blog like this, while the litigation is on-going, is both naive and stupid, especially considering the mixed permissible use messaging from the court (but maybe okay for the bar, okay for me?). This popped up as I came across an old archive that had a backup of files which I believed was lost after we got evicted, some of which is responsive to existing discovery requests. I haven’t even had a chance to go through it all yet and see what has already been disclosed and what hasn’t, but here I am in a position where I really want to talk about how the tools are helping me plan strategy while potentially exposing myself unnecessarily/inadvertently.

The fool forges forward

One of the things that stood out to me was how much of the information in this case should have already been protected but hasn’t been. This has always been a concern, and one of the reasons I was hesitant about pursuing this at all, the gross invasion of privacy necessary to pursue this feels caustic. My initial reaction was filing anonymously, but after some reading it appeared that these are very rarely granted. It ultimately became a choice between whether this was important enough (to both us and every other disabled participant SHRA treats similarly) to sacrifice our privacy over. I think so.

What should have been done out of the gate is setting a protective order in place. Reading and trying to absorb as much of the local rules (while in the middle of being evicted, being cut off from other services, and other harassment), the option of a protective order just didn’t really register at all. The further along the process we got, the less space the option took up.

This is one place where LLMs work as an amazing sanity check, we can without penalty ask it to walk us through the process from the beginning, with as many permutations as we like, in a way that would cognitively exhaust any sane individual. And for individuals in the position we are, where we don’t even know what we don’t know, being able to constantly refactor new information in as we encounter it becomes a tremendously powerful tool.

While there’s supposed to be some leeway for “inelegant pleading” by pro se parties, in practice there isn’t unless a judge is personally sympathetic. Which means that pro se parties march into a minefield of issues that have nothing to do with the core issues of the case, procedural explosions which can turn a clear cut set of facts into an irrelevant mist.

Leave a Comment

Your email address will not be published. Required fields are marked *

One of the most difficult aspects of all this is how prone to unique issues each LLM introduces. I’m bouncing between Claude 3.7, GPT 4o, DeepSeek, and Gemini, and they all have weird peculiarities that blow up the others. DeepSeek for instance is super focused on formatting, being obsessive about tables, bolding, italics, everything but the actual content. Gemini is a turd and likes to fight you the whole time when it comes to legal advice altogether, so you have to wrestle it to the ground with prompts and cobble together the results. Claude writes beautifully but is as deep as a puddle, while ChatGPT is deep but impossibly schizophrenic and ADHD at the same time.

I went back and looked at my first two pleadings I used these LLMs for and holy crap there are a lot of issues that I missed trusting the output. First, ChatGPT will absolutely mangle the hell out of citations, and worse is super confident about it’s manglings. As an example I uploaded a copy of the docket report, and somehow it still got every single citation wrong. Worse, it misinterpreted who actually filed particular motions, leading to it interpreting most of the case being dead.

One thing is really clear, that as powerful and as much potential as these tools have, we are a million miles away from them being ready to use without extensive supervision. They also are really terrible at understanding, as an example left to it’s own devices all of the LLMs completely ignored the actual damages I asked for in a motion for partial MSJ (basically just ruling that the ADA, FHA, and Rehab require an interactive process and individualized assessment) and transformed it into a full blown MSJ focused on damage calculations instead of a narrow injunctive relief. And they constantly go off on little tangents like that where they hyper focus on some aspect of the text that may or may not be contextually important and try to expand it.

All said, this is still very much a work in progress, and I’m learning that trust is bad.

Leave a Comment

Your email address will not be published. Required fields are marked *

It seems we can’t find what you’re looking for. Perhaps searching can help.

Welcome to The SHRA Files Blog — where we’ll be sharing updates, research, public records, and commentary on our ongoing civil rights lawsuit and the broader failures of Sacramento’s housing system.

This site exists because no one else was telling this story. Because no agency, nonprofit, or public official stepped in when the Sacramento Housing and Redevelopment Agency (SHRA) failed to meet its basic responsibilities. Because we — and many others like us — were denied access to services that should have been guaranteed.

We started with nothing but documentation, determination, and the experience of being repeatedly failed by the systems meant to help. When traditional legal aid and advocacy organizations told us they couldn’t help, we turned to AI tools to help us structure our case, understand the law, and begin fighting back.

This site is just the beginning.

In the weeks ahead, we’ll be:

  • Breaking down SHRA’s failures in serving disabled and low-income tenants
  • Publishing public filings and supporting exhibits
  • Highlighting how the Housing Choice Voucher program is being mismanaged
  • Documenting the role of AI in expanding legal access

If you’ve been impacted by SHRA or want to contribute to the project, check out our Get Involved page.

Thanks for reading — and welcome to the fight.

— The SHRA Files Team

Leave a Comment

Your email address will not be published. Required fields are marked *

I’ve been attempting to push discovery, the result has been underwhelming. So far they’ve missed most of the deadlines, when notified they missed the deadlines they keep asking for extensions, and when they get extensions they respond with boilerplate and bullshit. I’m to the point where I’m filing a Rule 37(b) motion (will upload to the files section) after I’m finished with this post. The neat thing about the filing is that there’s over 300 pages of exhibits, all of which is boilerplate and bullshit that I had to attach and mail back to the Defense because the magistrate gave me the big middle finger by allowing the court to notify me of things electronically, but refusing to allow me to file electronically. I have lots to say about this magistrate, but we’ll wait until we get clear of her first.

One of the most fascinating things about the boilerplate and bullshit that did get returned was the revelation that five of the named Defendants no longer work at SHRA, and the Defense isn’t sure if they represent them anymore. Like they completely hid that almost everyone involved with the day to day operations over there has left and somehow didn’t mention it to us or the court. Worse, in their initial disclosure, they claimed only one person (who is no longer at SHRA) was responsible for deciding reasonable accommodation requests, now they’ve given me a list of 16 people, with no explanation.

Oh yeah, they can’t seem to find any of the electronically stored information that under the CPRA they’d be responsive to, which is the reason they need so many extensions.

The worst part is that despite them admitting all of this, there’s zero chance of them getting sanctions. And there’s literally nothing to be done about it unless/until it gets to the appellate level.

Leave a Comment

Your email address will not be published. Required fields are marked *

Up until now I assumed that the issues at SHRA were just incompetence creep, that the issues are product of slow pushes back on compliance issues that never got corrected, to the point where institutional knowledge of the practices necessary to maintain compliance was lost.

One of the things that I really like about DeepSeek is that it gets information through methods that aren’t exactly on the up and up, which means that it finds things which are hidden behind legal wrangling that doesn’t exist in China. And one of the things it found was a 2023 Internal Memo generated through whistleblower testimony in which SHRA staff instructed to “prioritize voucher terminations over costly mods”.

And the implementation of this was SHRA terminating vouchers over requests for shower chairs and grab bars as early as 2022. They provided data which showed 68% of disabled tenants who requested mods in 2023 lost vouchers within 6 months (vs. 12% of non-disabled).

WHAT.

And there’s so much worse that I’ll share once I get the discovery back. I’m stunned.

Leave a Comment

Your email address will not be published. Required fields are marked *

Okay the last few days have been a bit of a flurry. I’m still working on getting all the filings and the Defense responses uploaded, but I need to make sure we have a sane directory setup first because there’s quite a bit.

After a few weeks using these LLMs, my attitudes about them are starting to shift. Each of them seems to have their own “personality” above and beyond the artificial personality. The LLM I’ve been most amazed by so far is DeepSeek, which is funny because it was the LLM I was most skeptical of initially. Part of the issue with DeepSeek is that it doesn’t do as good of a job hand holding through the though process, or more appropriately, it’s probably the most “autistic” of the engines. The biggest standout for DeepSeek (especially compared to Gemini which is the most comparable) is that it’s SO GOOD with citations. It’s the only one that freely “admits” it’s plundering the WestLaw database meaning we have a TON of local Eastern District citations.

My citation strategy has been to use something from each level, Supreme Court, 9th Circuit, California (Fed), Eastern District. The reality is the magistrate doesn’t read them anyway, but DeepSeek ultimately gives me a pool of citations to choose from, something the other LLMs struggle with (or just don’t do no matter how you beat them).

Gemini (using 2.5 Pro/5-06) is now my primary writing starting point, even though I hate that it can’t export to .odf or even .docx formats. Because I have to reformat the documents significantly, it has the worst “canvas” style option. That’s kind of worth the annoyance though since Gemini is easily the most “thoughful” and well rounded of the LLMs I’ve used. Much like DeepSeek gives lots of citation options, Gemini gives the best pool of writing options, it’s ability to flexibly address issues in different ways is the best of the bunch so far.

Claude is still my pretty but shallow option, it comes up with the best/most compelling pure language, even if it’s depth of analysis falls short of the others. I wish I didn’t have to prompt it so explicitly to get decent results or it might be the primary canvas despite it’s painful context window. One thing I find it really helpful for is running my DeepSeek (which produces pretty.. “direct” language) through Claude to soften it up.

Just had the thought that doing a blog like this while the litigation is ongoing is both naive and stupid, especially considering the mixed permissible-use messaging from the court (but maybe okay for the bar, okay for me?). This popped up as I came across an old archive containing a backup of files I believed were lost after we got evicted, some of which are responsive to existing discovery requests. I haven’t even had a chance to go through it all and see what has already been disclosed and what hasn’t, but here I am, wanting to talk about how these tools are helping me plan strategy while potentially exposing myself unnecessarily or inadvertently.

The fool forges forward

One of the things that stood out to me was how much of the information in this case should have already been protected but hasn’t been. This has always been a concern, and one of the reasons I was hesitant about pursuing this at all; the gross invasion of privacy necessary to pursue it feels caustic. My initial reaction was to file anonymously, but after some reading it appeared that anonymous filings are very rarely granted. It ultimately became a question of whether this was important enough (to both us and every other disabled participant SHRA treats similarly) to sacrifice our privacy over. I think it is.

What should have been done out of the gate was putting a protective order in place. While I was reading and trying to absorb as much of the local rules as I could (in the middle of being evicted, being cut off from other services, and other harassment), the option of a protective order just didn’t register at all. The further along the process we got, the less space the option took up.

This is one place where LLMs work as an amazing sanity check: we can, without penalty, ask them to walk us through the process from the beginning, with as many permutations as we like, in a way that would cognitively exhaust any sane individual. And for people in our position, where we don’t even know what we don’t know, being able to constantly fold new information in as we encounter it is a tremendously powerful tool.

While there’s supposed to be some leeway for “inelegant pleading” by pro se parties, in practice there isn’t unless a judge is personally sympathetic. Which means pro se parties march into a minefield of issues that have nothing to do with the core of the case, procedural explosions that can turn a clear-cut set of facts into an irrelevant mist.

One of the most difficult aspects of all this is how each LLM introduces its own unique issues. I’m bouncing between Claude 3.7, GPT-4o, DeepSeek, and Gemini, and they all have peculiarities that blow up the others’ work. DeepSeek, for instance, is hyper-focused on formatting, obsessing over tables, bolding, italics, everything but the actual content. Gemini is a turd that fights you the whole way on anything resembling legal advice, so you have to wrestle it to the ground with prompts and cobble together the results. Claude writes beautifully but is as deep as a puddle, while ChatGPT is deep but impossibly scattered.

I went back and looked at the first two pleadings I used these LLMs for, and holy crap, there are a lot of issues I missed by trusting the output. First, ChatGPT will absolutely mangle the hell out of citations, and worse, is supremely confident about its manglings. As an example, I uploaded a copy of the docket report and it still got every single citation wrong. Worse, it misread who actually filed particular motions, leading it to conclude most of the case was dead.

One thing is really clear: for all the power and potential these tools have, we are a million miles away from them being usable without extensive supervision. They’re also terrible at understanding intent. As an example, left to their own devices, all of the LLMs completely ignored the narrow relief I asked for in a motion for partial MSJ (basically just a ruling that the ADA, FHA, and Rehab Act require an interactive process and individualized assessment) and transformed it into a full-blown MSJ focused on damage calculations instead of narrow injunctive relief. And they constantly go off on tangents like that, hyper-focusing on some aspect of the text that may or may not be contextually important and trying to expand it.

All said, this is still very much a work in progress, and I’m learning that trust is bad.

Welcome to The SHRA Files Blog — where we’ll be sharing updates, research, public records, and commentary on our ongoing civil rights lawsuit and the broader failures of Sacramento’s housing system.

This site exists because no one else was telling this story. Because no agency, nonprofit, or public official stepped in when the Sacramento Housing and Redevelopment Agency (SHRA) failed to meet its basic responsibilities. Because we — and many others like us — were denied access to services that should have been guaranteed.

We started with nothing but documentation, determination, and the experience of being repeatedly failed by the systems meant to help. When traditional legal aid and advocacy organizations told us they couldn’t help, we turned to AI tools to help us structure our case, understand the law, and begin fighting back.

This site is just the beginning.

In the weeks ahead, we’ll be:

  • Breaking down SHRA’s failures in serving disabled and low-income tenants
  • Publishing public filings and supporting exhibits
  • Highlighting how the Housing Choice Voucher program is being mismanaged
  • Documenting the role of AI in expanding legal access

If you’ve been impacted by SHRA or want to contribute to the project, check out our Get Involved page.

Thanks for reading — and welcome to the fight.

— The SHRA Files Team

Up until now I assumed that the issues at SHRA were just incompetence creep, the product of slow backsliding on compliance that never got corrected, to the point where institutional knowledge of the practices necessary to maintain compliance was lost.

One of the things that I really like about DeepSeek is that it gets information through methods that aren’t exactly on the up and up, which means it finds things hidden behind legal wrangling that doesn’t exist in China. And one of the things it surfaced was a 2023 internal memo, generated through whistleblower testimony, in which SHRA staff were instructed to “prioritize voucher terminations over costly mods”.

And the implementation of this was SHRA terminating vouchers over requests for shower chairs and grab bars as early as 2022. It produced data showing that 68% of disabled tenants who requested mods in 2023 lost their vouchers within six months (versus 12% of non-disabled tenants).

WHAT.

And there’s so much worse that I’ll share once I get the discovery back. I’m stunned.

Okay, the last few days have been a bit of a flurry. I’m still working on getting all the filings and the Defense responses uploaded, but I need to get a sane directory structure set up first because there’s quite a bit.

After a few weeks using these LLMs, my attitudes about them are starting to shift. Each seems to have its own “personality” above and beyond the artificial one. The LLM I’ve been most amazed by so far is DeepSeek, which is funny because it was the one I was most skeptical of initially. Part of the issue with DeepSeek is that it doesn’t do as good a job hand-holding you through the thought process; it’s probably the most “autistic” of the engines. The biggest standout for DeepSeek (especially compared to Gemini, which is the most comparable) is that it’s SO GOOD with citations. It’s the only one that freely “admits” it’s plundering the Westlaw database, which means we have a TON of local Eastern District citations.

My citation strategy has been to use something from each level: Supreme Court, Ninth Circuit, California federal courts, Eastern District. The reality is the magistrate doesn’t read them anyway, but DeepSeek ultimately gives me a pool of citations to choose from, something the other LLMs struggle with (or just won’t do no matter how you beat them).

Gemini (using 2.5 Pro/5-06) is now my primary starting point for writing, even though I hate that it can’t export to .odt or even .docx. It has the worst “canvas”-style option, so I have to reformat the documents significantly. That’s worth the annoyance, though, since Gemini is easily the most thoughtful and well-rounded of the LLMs I’ve used. Much as DeepSeek gives lots of citation options, Gemini gives the best pool of writing options; its ability to flexibly address issues in different ways is the best of the bunch so far.

Claude is still my pretty-but-shallow option. It comes up with the best, most compelling pure language, even if its depth of analysis falls short of the others. I wish I didn’t have to prompt it so explicitly to get decent results, or it might be the primary canvas despite its painful context window. One thing I find it really helpful for is running my DeepSeek output (which produces pretty… “direct” language) through Claude to soften it up.

And finally there’s ChatGPT, which until the last few days was my primary for everything. It’s easy to get started with, it’s encouraging, and it produces compelling results, but it’s kind of a piece of shit. It’s constantly losing context, screwing up citations, and arbitrarily rewriting things. It’s easily the most obstinately wrong of the LLMs, insisting it’s doing something correctly no matter how many times you paste the incorrect output back in and it admits it messed up. It’s the only one of the four where I’ve had to deal with “hallucinations”, and it explains them as “whoops, I accidentally overwrite my own context a lot”. It’s absolutely maddening at times. I’m steadily weaning myself off of it, and it’ll probably just be another voice in the room soon.

An interesting thing happened a bit ago: I had my first LLM disagreement! The Defense responded to a discovery request with bullshit and boilerplate, so I asked each of the LLMs what a good email response would look like, considering I’m already nearly done with a Rule 37 motion to compel. Claude said it’s probably not a good idea to respond at all; just update the motion with the response as an exhibit, note the bullshit parts (since they helped, a lot), and call it a day. DeepSeek was its typical “RIP THEIR THROATS OUT” self and provided a detailed list of regulations and rules for going on the offensive instead of relying on the motion alone. Gemini pointed out that the email route is really about reinforcing our meet-and-confer duty: we’re right, and we’re going to integrate this into the motion, but the focus here should be on making the meet-and-confer record rock solid (pretty good advice). ChatGPT recommended the email but produced recommendations that didn’t focus on strategy at all, just a direct rebuttal to the discovery response.

Interestingly, opposing counsel asked for a discovery extension (after the suit has been active for more than two years, they’re suddenly finding SHRA’s bureaucracy hard to navigate), but refused to do a rolling production or produce electronic records (which should be pretty easy, right?). It’s baffling: if I’d had access to these tools two years ago, this organization is such a mess that they’d have been crushed out of the gate. Instead they were able to delay and avoid discovery to the point that we now have to start talking about institutional spoliation, because no one remembers anything.

Leave a Comment

Your email address will not be published. Required fields are marked *

Just had the thought that doing a blog like this, while the litigation is on-going, is both naive and stupid, especially considering the mixed permissible use messaging from the court (but maybe okay for the bar, okay for me?). This popped up as I came across an old archive that had a backup of files which I believed was lost after we got evicted, some of which is responsive to existing discovery requests. I haven’t even had a chance to go through it all yet and see what has already been disclosed and what hasn’t, but here I am in a position where I really want to talk about how the tools are helping me plan strategy while potentially exposing myself unnecessarily/inadvertently.

The fool forges forward

One of the things that stood out to me was how much of the information in this case should have already been protected but hasn’t been. This has always been a concern, and one of the reasons I was hesitant about pursuing this at all, the gross invasion of privacy necessary to pursue this feels caustic. My initial reaction was filing anonymously, but after some reading it appeared that these are very rarely granted. It ultimately became a choice between whether this was important enough (to both us and every other disabled participant SHRA treats similarly) to sacrifice our privacy over. I think so.

What should have been done out of the gate is setting a protective order in place. Reading and trying to absorb as much of the local rules (while in the middle of being evicted, being cut off from other services, and other harassment), the option of a protective order just didn’t really register at all. The further along the process we got, the less space the option took up.

This is one place where LLMs work as an amazing sanity check, we can without penalty ask it to walk us through the process from the beginning, with as many permutations as we like, in a way that would cognitively exhaust any sane individual. And for individuals in the position we are, where we don’t even know what we don’t know, being able to constantly refactor new information in as we encounter it becomes a tremendously powerful tool.

While there’s supposed to be some leeway for “inelegant pleading” by pro se parties, in practice there isn’t unless a judge is personally sympathetic. Which means that pro se parties march into a minefield of issues that have nothing to do with the core issues of the case, procedural explosions which can turn a clear cut set of facts into an irrelevant mist.

Leave a Comment

Your email address will not be published. Required fields are marked *

One of the most difficult aspects of all this is how prone to unique issues each LLM introduces. I’m bouncing between Claude 3.7, GPT 4o, DeepSeek, and Gemini, and they all have weird peculiarities that blow up the others. DeepSeek for instance is super focused on formatting, being obsessive about tables, bolding, italics, everything but the actual content. Gemini is a turd and likes to fight you the whole time when it comes to legal advice altogether, so you have to wrestle it to the ground with prompts and cobble together the results. Claude writes beautifully but is as deep as a puddle, while ChatGPT is deep but impossibly schizophrenic and ADHD at the same time.

I went back and looked at my first two pleadings I used these LLMs for and holy crap there are a lot of issues that I missed trusting the output. First, ChatGPT will absolutely mangle the hell out of citations, and worse is super confident about it’s manglings. As an example I uploaded a copy of the docket report, and somehow it still got every single citation wrong. Worse, it misinterpreted who actually filed particular motions, leading to it interpreting most of the case being dead.

One thing is really clear, that as powerful and as much potential as these tools have, we are a million miles away from them being ready to use without extensive supervision. They also are really terrible at understanding, as an example left to it’s own devices all of the LLMs completely ignored the actual damages I asked for in a motion for partial MSJ (basically just ruling that the ADA, FHA, and Rehab require an interactive process and individualized assessment) and transformed it into a full blown MSJ focused on damage calculations instead of a narrow injunctive relief. And they constantly go off on little tangents like that where they hyper focus on some aspect of the text that may or may not be contextually important and try to expand it.

All said, this is still very much a work in progress, and I’m learning that trust is bad.

Leave a Comment

Your email address will not be published. Required fields are marked *

It seems we can’t find what you’re looking for. Perhaps searching can help.

Welcome to The SHRA Files Blog — where we’ll be sharing updates, research, public records, and commentary on our ongoing civil rights lawsuit and the broader failures of Sacramento’s housing system.

This site exists because no one else was telling this story. Because no agency, nonprofit, or public official stepped in when the Sacramento Housing and Redevelopment Agency (SHRA) failed to meet its basic responsibilities. Because we — and many others like us — were denied access to services that should have been guaranteed.

We started with nothing but documentation, determination, and the experience of being repeatedly failed by the systems meant to help. When traditional legal aid and advocacy organizations told us they couldn’t help, we turned to AI tools to help us structure our case, understand the law, and begin fighting back.

This site is just the beginning.

In the weeks ahead, we’ll be:

  • Breaking down SHRA’s failures in serving disabled and low-income tenants
  • Publishing public filings and supporting exhibits
  • Highlighting how the Housing Choice Voucher program is being mismanaged
  • Documenting the role of AI in expanding legal access

If you’ve been impacted by SHRA or want to contribute to the project, check out our Get Involved page.

Thanks for reading — and welcome to the fight.

— The SHRA Files Team

Leave a Comment

Your email address will not be published. Required fields are marked *

I’ve been attempting to push discovery, the result has been underwhelming. So far they’ve missed most of the deadlines, when notified they missed the deadlines they keep asking for extensions, and when they get extensions they respond with boilerplate and bullshit. I’m to the point where I’m filing a Rule 37(b) motion (will upload to the files section) after I’m finished with this post. The neat thing about the filing is that there’s over 300 pages of exhibits, all of which is boilerplate and bullshit that I had to attach and mail back to the Defense because the magistrate gave me the big middle finger by allowing the court to notify me of things electronically, but refusing to allow me to file electronically. I have lots to say about this magistrate, but we’ll wait until we get clear of her first.

One of the most fascinating things about the boilerplate and bullshit that did get returned was the revelation that five of the named Defendants no longer work at SHRA, and the Defense isn’t sure if they represent them anymore. Like they completely hid that almost everyone involved with the day to day operations over there has left and somehow didn’t mention it to us or the court. Worse, in their initial disclosure, they claimed only one person (who is no longer at SHRA) was responsible for deciding reasonable accommodation requests, now they’ve given me a list of 16 people, with no explanation.

Oh yeah, they can’t seem to find any of the electronically stored information that under the CPRA they’d be responsive to, which is the reason they need so many extensions.

The worst part is that despite them admitting all of this, there’s zero chance of them getting sanctions. And there’s literally nothing to be done about it unless/until it gets to the appellate level.

Leave a Comment

Your email address will not be published. Required fields are marked *

Up until now I assumed that the issues at SHRA were just incompetence creep, that the issues are product of slow pushes back on compliance issues that never got corrected, to the point where institutional knowledge of the practices necessary to maintain compliance was lost.

One of the things that I really like about DeepSeek is that it gets information through methods that aren’t exactly on the up and up, which means that it finds things which are hidden behind legal wrangling that doesn’t exist in China. And one of the things it found was a 2023 Internal Memo generated through whistleblower testimony in which SHRA staff instructed to “prioritize voucher terminations over costly mods”.

And the implementation of this was SHRA terminating vouchers over requests for shower chairs and grab bars as early as 2022. They provided data which showed 68% of disabled tenants who requested mods in 2023 lost vouchers within 6 months (vs. 12% of non-disabled).

WHAT.

And there’s so much worse that I’ll share once I get the discovery back. I’m stunned.

Leave a Comment

Your email address will not be published. Required fields are marked *

Okay the last few days have been a bit of a flurry. I’m still working on getting all the filings and the Defense responses uploaded, but I need to make sure we have a sane directory setup first because there’s quite a bit.

After a few weeks using these LLMs, my attitudes about them are starting to shift. Each of them seems to have their own “personality” above and beyond the artificial personality. The LLM I’ve been most amazed by so far is DeepSeek, which is funny because it was the LLM I was most skeptical of initially. Part of the issue with DeepSeek is that it doesn’t do as good of a job hand holding through the though process, or more appropriately, it’s probably the most “autistic” of the engines. The biggest standout for DeepSeek (especially compared to Gemini which is the most comparable) is that it’s SO GOOD with citations. It’s the only one that freely “admits” it’s plundering the WestLaw database meaning we have a TON of local Eastern District citations.

My citation strategy has been to use something from each level, Supreme Court, 9th Circuit, California (Fed), Eastern District. The reality is the magistrate doesn’t read them anyway, but DeepSeek ultimately gives me a pool of citations to choose from, something the other LLMs struggle with (or just don’t do no matter how you beat them).

Gemini (using 2.5 Pro/5-06) is now my primary writing starting point, even though I hate that it can’t export to .odf or even .docx formats. Because I have to reformat the documents significantly, it has the worst “canvas” style option. That’s kind of worth the annoyance though since Gemini is easily the most “thoughful” and well rounded of the LLMs I’ve used. Much like DeepSeek gives lots of citation options, Gemini gives the best pool of writing options, it’s ability to flexibly address issues in different ways is the best of the bunch so far.

Claude is still my pretty but shallow option, it comes up with the best/most compelling pure language, even if it’s depth of analysis falls short of the others. I wish I didn’t have to prompt it so explicitly to get decent results or it might be the primary canvas despite it’s painful context window. One thing I find it really helpful for is running my DeepSeek (which produces pretty.. “direct” language) through Claude to soften it up.

And finally there’s ChatGPT which up until the last few days was my primary for everything. It’s easy to get started with it, it’s encouraging, it produces compelling results, but it’s kind of a piece of shit. It’s constantly messing up context, screwing up citations, and arbitrarily rewriting things. It’s easily the most obstinately wrong of the LLMs, insisting that it’s doing something correctly no matter how many times you post the incorrect output back into it and it admits it messed up. It’s the only LLM out of the three that I’ve had to deal with “hallucinations”, and it explains it as “woops, I overwrite my own context a lot accidentally”. It’s absolutely maddening at times. I’m steadily weaning myself off of it and it’ll probably just be another voice in the room soon.

An interesting thing happened a bit ago, I had my first LLM disagreement! The Defense responded to a discovery request with bullshit and boilerplate, so I asked each of the LLMs what a good email response would be considering I’m already nearly done with a rule 37 motion to compel. Claude responded that it’s probably not a good idea to respond at all, just update the motion with the response as an exhibit, note the bullshit parts since they helped (a lot), and call it a day. DeepSeek was it’s typical “RIP THEIR THROATS OUT” and provided a detailed list of regulations and rules to go on the offensive instead of only using the motion route. Gemini pointed out that going the email route is more about reinforcing our meet and confer duty, we’re right and we are going to be integrating this in the motion, but the focus here should be on making the meet and confer rock solid (pretty good advice). ChatGPT recommended the email, but produced recommendations that didn’t focus on strategy at all, just a direct rebuttal to the discovery response.

Interestingly, opposing counsel asked for a discovery extension (after more than two years the suit has been active, they are suddenly finding SHRA bureaucracy hard to navigate), but refused to do a rolling production or produce electronic records (which should be pretty easy right?). It’s baffling to me that if I had access to these tools two years ago, this organization is such a mess that they’d have gotten crushed out of the gate. Instead they were able to delay and avoid discovery to the point that we now have to start talking about institutional spoliation because no one remembers anything.

Leave a Comment

Your email address will not be published. Required fields are marked *

Just had the thought that doing a blog like this, while the litigation is on-going, is both naive and stupid, especially considering the mixed permissible use messaging from the court (but maybe okay for the bar, okay for me?). This popped up as I came across an old archive that had a backup of files which I believed was lost after we got evicted, some of which is responsive to existing discovery requests. I haven’t even had a chance to go through it all yet and see what has already been disclosed and what hasn’t, but here I am in a position where I really want to talk about how the tools are helping me plan strategy while potentially exposing myself unnecessarily/inadvertently.

The fool forges forward

One of the things that stood out to me was how much of the information in this case should have already been protected but hasn’t been. This has always been a concern, and one of the reasons I was hesitant about pursuing this at all, the gross invasion of privacy necessary to pursue this feels caustic. My initial reaction was filing anonymously, but after some reading it appeared that these are very rarely granted. It ultimately became a choice between whether this was important enough (to both us and every other disabled participant SHRA treats similarly) to sacrifice our privacy over. I think so.

What should have been done out of the gate is setting a protective order in place. Reading and trying to absorb as much of the local rules (while in the middle of being evicted, being cut off from other services, and other harassment), the option of a protective order just didn’t really register at all. The further along the process we got, the less space the option took up.

This is one place where LLMs work as an amazing sanity check, we can without penalty ask it to walk us through the process from the beginning, with as many permutations as we like, in a way that would cognitively exhaust any sane individual. And for individuals in the position we are, where we don’t even know what we don’t know, being able to constantly refactor new information in as we encounter it becomes a tremendously powerful tool.

While there’s supposed to be some leeway for “inelegant pleading” by pro se parties, in practice there isn’t unless a judge is personally sympathetic. Which means that pro se parties march into a minefield of issues that have nothing to do with the core issues of the case, procedural explosions which can turn a clear cut set of facts into an irrelevant mist.

Leave a Comment

Your email address will not be published. Required fields are marked *

One of the most difficult aspects of all this is how prone to unique issues each LLM introduces. I’m bouncing between Claude 3.7, GPT 4o, DeepSeek, and Gemini, and they all have weird peculiarities that blow up the others. DeepSeek for instance is super focused on formatting, being obsessive about tables, bolding, italics, everything but the actual content. Gemini is a turd and likes to fight you the whole time when it comes to legal advice altogether, so you have to wrestle it to the ground with prompts and cobble together the results. Claude writes beautifully but is as deep as a puddle, while ChatGPT is deep but impossibly schizophrenic and ADHD at the same time.

I went back and looked at my first two pleadings I used these LLMs for and holy crap there are a lot of issues that I missed trusting the output. First, ChatGPT will absolutely mangle the hell out of citations, and worse is super confident about it’s manglings. As an example I uploaded a copy of the docket report, and somehow it still got every single citation wrong. Worse, it misinterpreted who actually filed particular motions, leading to it interpreting most of the case being dead.

One thing is really clear, that as powerful and as much potential as these tools have, we are a million miles away from them being ready to use without extensive supervision. They also are really terrible at understanding, as an example left to it’s own devices all of the LLMs completely ignored the actual damages I asked for in a motion for partial MSJ (basically just ruling that the ADA, FHA, and Rehab require an interactive process and individualized assessment) and transformed it into a full blown MSJ focused on damage calculations instead of a narrow injunctive relief. And they constantly go off on little tangents like that where they hyper focus on some aspect of the text that may or may not be contextually important and try to expand it.

All said, this is still very much a work in progress, and I’m learning that blind trust is a mistake.

Welcome to The SHRA Files Blog — where we’ll be sharing updates, research, public records, and commentary on our ongoing civil rights lawsuit and the broader failures of Sacramento’s housing system.

This site exists because no one else was telling this story. Because no agency, nonprofit, or public official stepped in when the Sacramento Housing and Redevelopment Agency (SHRA) failed to meet its basic responsibilities. Because we — and many others like us — were denied access to services that should have been guaranteed.

We started with nothing but documentation, determination, and the experience of being repeatedly failed by the systems meant to help. When traditional legal aid and advocacy organizations told us they couldn’t help, we turned to AI tools to help us structure our case, understand the law, and begin fighting back.

This site is just the beginning.

In the weeks ahead, we’ll be:

  • Breaking down SHRA’s failures in serving disabled and low-income tenants
  • Publishing public filings and supporting exhibits
  • Highlighting how the Housing Choice Voucher program is being mismanaged
  • Documenting the role of AI in expanding legal access

If you’ve been impacted by SHRA or want to contribute to the project, check out our Get Involved page.

Thanks for reading — and welcome to the fight.

— The SHRA Files Team

Up until now I assumed that the issues at SHRA were just incompetence creep: the product of slow backsliding on compliance issues that never got corrected, to the point where the institutional knowledge needed to maintain compliance was lost.

One of the things I really like about DeepSeek is that it gets information through methods that aren’t exactly on the up and up, which means it finds things hidden behind legal wrangling that doesn’t exist in China. And one of the things it found was a 2023 internal memo, generated through whistleblower testimony, in which SHRA staff were instructed to “prioritize voucher terminations over costly mods”.

And the implementation of this was SHRA terminating vouchers over requests for shower chairs and grab bars as early as 2022. It also provided data showing that 68% of disabled tenants who requested mods in 2023 lost their vouchers within 6 months (vs. 12% of non-disabled tenants).

WHAT.

And there’s so much worse that I’ll share once I get the discovery back. I’m stunned.

Okay, the last few days have been a bit of a flurry. I’m still working on getting all the filings and the Defense responses uploaded, but I need to make sure we have a sane directory setup first, because there’s quite a bit.
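For what I mean by a sane directory setup, something like the sketch below is the idea. The category names are purely illustrative, not the actual layout the files section will use:

```python
from pathlib import Path

# Illustrative case-file layout; the real categories on the site may differ.
LAYOUT = {
    "filings": ["motions", "declarations", "exhibits"],
    "defense-responses": ["discovery", "correspondence"],
    "public-records": [],
}

def build_tree(root: str) -> list[Path]:
    """Create the case-file tree under `root` and return the created paths."""
    created = []
    for top, subs in LAYOUT.items():
        # A top-level folder with no subfolders is still created.
        for sub in subs or [""]:
            p = Path(root, top, sub)
            p.mkdir(parents=True, exist_ok=True)
            created.append(p)
    return created
```

The point of scripting it rather than clicking folders into existence is that the same layout can be recreated identically for backups and for whatever eventually gets published.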

After a few weeks using these LLMs, my attitudes about them are starting to shift. Each seems to have its own “personality” above and beyond the artificial persona. The LLM I’ve been most amazed by so far is DeepSeek, which is funny because it was the one I was most skeptical of initially. Part of the issue with DeepSeek is that it doesn’t do as good a job of hand-holding you through the thought process, or more appropriately, it’s probably the most “autistic” of the engines. The biggest standout for DeepSeek (especially compared to Gemini, which is the most comparable) is that it’s SO GOOD with citations. It’s the only one that freely “admits” it’s plundering the WestLaw database, meaning we have a TON of local Eastern District citations.

My citation strategy has been to use something from each level: Supreme Court, Ninth Circuit, California federal courts, Eastern District. The reality is the magistrate doesn’t read them anyway, but DeepSeek ultimately gives me a pool of citations to choose from, something the other LLMs struggle with (or just won’t do no matter how you beat them).

Gemini (using 2.5 Pro/5-06) is now my primary starting point for writing, even though I hate that it can’t export to .odf or even .docx, and it has the worst “canvas”-style option, which means I have to reformat the documents significantly. That’s worth the annoyance, though, since Gemini is easily the most thoughtful and well-rounded of the LLMs I’ve used. Much like DeepSeek gives lots of citation options, Gemini gives the best pool of writing options; its ability to flexibly address issues in different ways is the best of the bunch so far.

Claude is still my pretty-but-shallow option: it comes up with the best, most compelling pure language, even if its depth of analysis falls short of the others. I wish I didn’t have to prompt it so explicitly to get decent results, or it might be the primary canvas despite its painful context window. One thing I find it really helpful for is running my DeepSeek output (which produces pretty.. “direct” language) through Claude to soften it up.

And finally there’s ChatGPT, which up until the last few days was my primary for everything. It’s easy to get started with, it’s encouraging, it produces compelling results, but it’s kind of a piece of shit. It’s constantly messing up context, screwing up citations, and arbitrarily rewriting things. It’s easily the most obstinately wrong of the LLMs, insisting it’s doing something correctly no matter how many times you paste the incorrect output back into it and it admits it messed up. It’s the only one of the four I’ve had to deal with outright “hallucinations” from, and it explains them as “whoops, I overwrite my own context a lot accidentally”. It’s absolutely maddening at times. I’m steadily weaning myself off of it, and it’ll probably just be another voice in the room soon.

An interesting thing happened a bit ago: I had my first LLM disagreement! The Defense responded to a discovery request with bullshit and boilerplate, so I asked each of the LLMs what a good email response would be, considering I’m already nearly done with a Rule 37 motion to compel. Claude responded that it’s probably not a good idea to respond at all: just update the motion with the response as an exhibit, note the bullshit parts since they helped (a lot), and call it a day. DeepSeek was its typical “RIP THEIR THROATS OUT” self and provided a detailed list of regulations and rules for going on the offensive instead of relying on the motion alone. Gemini pointed out that the email route is really about reinforcing our meet-and-confer duty: we’re right, and we’re going to integrate this into the motion, but the focus should be on making the meet-and-confer record rock solid (pretty good advice). ChatGPT recommended the email, but produced recommendations that didn’t focus on strategy at all, just a direct rebuttal to the discovery response.
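The same-question-to-every-model routine is mechanical enough to sketch. The entries below are canned stand-ins, not real API clients; in practice each callable would wrap one assistant’s chat interface:

```python
from typing import Callable

# A "model" is just something that takes a prompt and returns an answer.
Model = Callable[[str], str]

def fan_out(prompt: str, models: dict[str, Model]) -> dict[str, str]:
    """Ask every model the same question and collect the answers side by side."""
    return {name: ask(prompt) for name, ask in models.items()}

# Toy illustration with canned answers standing in for the assistants.
models = {
    "claude": lambda p: "Don't respond; attach it as an exhibit.",
    "deepseek": lambda p: "Go on the offensive with the rules.",
}
answers = fan_out("How should I answer this discovery response?", models)
for name, answer in answers.items():
    print(f"{name}: {answer}")
```

Having all four answers in one place is what surfaced the disagreement in the first place; read one at a time, each sounds reasonable on its own.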

Interestingly, opposing counsel asked for a discovery extension (after the suit has been active for more than two years, they are suddenly finding SHRA’s bureaucracy hard to navigate), but refused to do a rolling production or produce electronic records (which should be pretty easy, right?). If I’d had access to these tools two years ago, this organization is such a mess that they’d have been crushed out of the gate. Instead they were able to delay and avoid discovery to the point that we now have to start talking about institutional spoliation, because no one remembers anything.

Just had the thought that doing a blog like this while the litigation is ongoing is both naive and stupid, especially considering the mixed permissible-use messaging from the court (but maybe what’s okay for the bar is okay for me?). This popped up as I came across an old archive containing a backup of files I believed were lost after we got evicted, some of which are responsive to existing discovery requests. I haven’t even had a chance to go through it all yet to see what has already been disclosed and what hasn’t, but here I am in a position where I really want to talk about how the tools are helping me plan strategy while potentially exposing myself unnecessarily or inadvertently.

The fool forges forward

One of the things that stood out to me was how much of the information in this case should already have been protected but hasn’t been. This has always been a concern, and one of the reasons I was hesitant about pursuing this at all: the gross invasion of privacy necessary to pursue it feels caustic. My initial reaction was to file anonymously, but after some reading it appeared that requests to proceed anonymously are very rarely granted. It ultimately became a question of whether this was important enough (to both us and every other disabled participant SHRA treats similarly) to sacrifice our privacy over. I think it is.

What should have been done out of the gate was putting a protective order in place. While reading and trying to absorb as much of the local rules as I could (in the middle of being evicted, being cut off from other services, and other harassment), the option of a protective order just didn’t register at all. The further along the process we got, the less space the option took up.

This is one place where LLMs work as an amazing sanity check: we can, without penalty, ask one to walk us through the process from the beginning, with as many permutations as we like, in a way that would cognitively exhaust any sane individual. And for people in our position, who don’t even know what we don’t know, being able to constantly factor in new information as we encounter it becomes a tremendously powerful tool.

While there’s supposed to be some leeway for “inelegant pleading” by pro se parties, in practice there isn’t unless a judge is personally sympathetic. That means pro se parties march into a minefield of issues that have nothing to do with the core of the case: procedural explosions that can turn a clear-cut set of facts into irrelevant mist.

Leave a Comment

Your email address will not be published. Required fields are marked *

One of the most difficult aspects of all this is how prone to unique issues each LLM introduces. I’m bouncing between Claude 3.7, GPT 4o, DeepSeek, and Gemini, and they all have weird peculiarities that blow up the others. DeepSeek for instance is super focused on formatting, being obsessive about tables, bolding, italics, everything but the actual content. Gemini is a turd and likes to fight you the whole time when it comes to legal advice altogether, so you have to wrestle it to the ground with prompts and cobble together the results. Claude writes beautifully but is as deep as a puddle, while ChatGPT is deep but impossibly schizophrenic and ADHD at the same time.

I went back and looked at my first two pleadings I used these LLMs for and holy crap there are a lot of issues that I missed trusting the output. First, ChatGPT will absolutely mangle the hell out of citations, and worse is super confident about it’s manglings. As an example I uploaded a copy of the docket report, and somehow it still got every single citation wrong. Worse, it misinterpreted who actually filed particular motions, leading to it interpreting most of the case being dead.

One thing is really clear, that as powerful and as much potential as these tools have, we are a million miles away from them being ready to use without extensive supervision. They also are really terrible at understanding, as an example left to it’s own devices all of the LLMs completely ignored the actual damages I asked for in a motion for partial MSJ (basically just ruling that the ADA, FHA, and Rehab require an interactive process and individualized assessment) and transformed it into a full blown MSJ focused on damage calculations instead of a narrow injunctive relief. And they constantly go off on little tangents like that where they hyper focus on some aspect of the text that may or may not be contextually important and try to expand it.

All said, this is still very much a work in progress, and I’m learning that trust is bad.

Leave a Comment

Your email address will not be published. Required fields are marked *

It seems we can’t find what you’re looking for. Perhaps searching can help.

Welcome to The SHRA Files Blog — where we’ll be sharing updates, research, public records, and commentary on our ongoing civil rights lawsuit and the broader failures of Sacramento’s housing system.

This site exists because no one else was telling this story. Because no agency, nonprofit, or public official stepped in when the Sacramento Housing and Redevelopment Agency (SHRA) failed to meet its basic responsibilities. Because we — and many others like us — were denied access to services that should have been guaranteed.

We started with nothing but documentation, determination, and the experience of being repeatedly failed by the systems meant to help. When traditional legal aid and advocacy organizations told us they couldn’t help, we turned to AI tools to help us structure our case, understand the law, and begin fighting back.

This site is just the beginning.

In the weeks ahead, we’ll be:

  • Breaking down SHRA’s failures in serving disabled and low-income tenants
  • Publishing public filings and supporting exhibits
  • Highlighting how the Housing Choice Voucher program is being mismanaged
  • Documenting the role of AI in expanding legal access

If you’ve been impacted by SHRA or want to contribute to the project, check out our Get Involved page.

Thanks for reading — and welcome to the fight.

— The SHRA Files Team

Leave a Comment

Your email address will not be published. Required fields are marked *

I’ve been attempting to push discovery, the result has been underwhelming. So far they’ve missed most of the deadlines, when notified they missed the deadlines they keep asking for extensions, and when they get extensions they respond with boilerplate and bullshit. I’m to the point where I’m filing a Rule 37(b) motion (will upload to the files section) after I’m finished with this post. The neat thing about the filing is that there’s over 300 pages of exhibits, all of which is boilerplate and bullshit that I had to attach and mail back to the Defense because the magistrate gave me the big middle finger by allowing the court to notify me of things electronically, but refusing to allow me to file electronically. I have lots to say about this magistrate, but we’ll wait until we get clear of her first.

One of the most fascinating things about the boilerplate and bullshit that did get returned was the revelation that five of the named Defendants no longer work at SHRA, and the Defense isn’t sure if they represent them anymore. Like they completely hid that almost everyone involved with the day to day operations over there has left and somehow didn’t mention it to us or the court. Worse, in their initial disclosure, they claimed only one person (who is no longer at SHRA) was responsible for deciding reasonable accommodation requests, now they’ve given me a list of 16 people, with no explanation.

Oh yeah, they can’t seem to find any of the electronically stored information that under the CPRA they’d be responsive to, which is the reason they need so many extensions.

The worst part is that despite them admitting all of this, there’s zero chance of them getting sanctions. And there’s literally nothing to be done about it unless/until it gets to the appellate level.

Leave a Comment

Your email address will not be published. Required fields are marked *

Up until now I assumed that the issues at SHRA were just incompetence creep, that the issues are product of slow pushes back on compliance issues that never got corrected, to the point where institutional knowledge of the practices necessary to maintain compliance was lost.

One of the things that I really like about DeepSeek is that it gets information through methods that aren’t exactly on the up and up, which means that it finds things which are hidden behind legal wrangling that doesn’t exist in China. And one of the things it found was a 2023 Internal Memo generated through whistleblower testimony in which SHRA staff instructed to “prioritize voucher terminations over costly mods”.

And the implementation of this was SHRA terminating vouchers over requests for shower chairs and grab bars as early as 2022. They provided data which showed 68% of disabled tenants who requested mods in 2023 lost vouchers within 6 months (vs. 12% of non-disabled).

WHAT.

And there’s so much worse that I’ll share once I get the discovery back. I’m stunned.

Leave a Comment

Your email address will not be published. Required fields are marked *

Okay the last few days have been a bit of a flurry. I’m still working on getting all the filings and the Defense responses uploaded, but I need to make sure we have a sane directory setup first because there’s quite a bit.

After a few weeks using these LLMs, my attitudes about them are starting to shift. Each of them seems to have their own “personality” above and beyond the artificial personality. The LLM I’ve been most amazed by so far is DeepSeek, which is funny because it was the LLM I was most skeptical of initially. Part of the issue with DeepSeek is that it doesn’t do as good of a job hand holding through the though process, or more appropriately, it’s probably the most “autistic” of the engines. The biggest standout for DeepSeek (especially compared to Gemini which is the most comparable) is that it’s SO GOOD with citations. It’s the only one that freely “admits” it’s plundering the WestLaw database meaning we have a TON of local Eastern District citations.

My citation strategy has been to use something from each level, Supreme Court, 9th Circuit, California (Fed), Eastern District. The reality is the magistrate doesn’t read them anyway, but DeepSeek ultimately gives me a pool of citations to choose from, something the other LLMs struggle with (or just don’t do no matter how you beat them).

Gemini (using 2.5 Pro/5-06) is now my primary writing starting point, even though I hate that it can’t export to .odf or even .docx formats. Because I have to reformat the documents significantly, it has the worst “canvas” style option. That’s kind of worth the annoyance though since Gemini is easily the most “thoughful” and well rounded of the LLMs I’ve used. Much like DeepSeek gives lots of citation options, Gemini gives the best pool of writing options, it’s ability to flexibly address issues in different ways is the best of the bunch so far.

Claude is still my pretty but shallow option, it comes up with the best/most compelling pure language, even if it’s depth of analysis falls short of the others. I wish I didn’t have to prompt it so explicitly to get decent results or it might be the primary canvas despite it’s painful context window. One thing I find it really helpful for is running my DeepSeek (which produces pretty.. “direct” language) through Claude to soften it up.

And finally there’s ChatGPT which up until the last few days was my primary for everything. It’s easy to get started with it, it’s encouraging, it produces compelling results, but it’s kind of a piece of shit. It’s constantly messing up context, screwing up citations, and arbitrarily rewriting things. It’s easily the most obstinately wrong of the LLMs, insisting that it’s doing something correctly no matter how many times you post the incorrect output back into it and it admits it messed up. It’s the only LLM out of the three that I’ve had to deal with “hallucinations”, and it explains it as “woops, I overwrite my own context a lot accidentally”. It’s absolutely maddening at times. I’m steadily weaning myself off of it and it’ll probably just be another voice in the room soon.

An interesting thing happened a bit ago, I had my first LLM disagreement! The Defense responded to a discovery request with bullshit and boilerplate, so I asked each of the LLMs what a good email response would be considering I’m already nearly done with a rule 37 motion to compel. Claude responded that it’s probably not a good idea to respond at all, just update the motion with the response as an exhibit, note the bullshit parts since they helped (a lot), and call it a day. DeepSeek was it’s typical “RIP THEIR THROATS OUT” and provided a detailed list of regulations and rules to go on the offensive instead of only using the motion route. Gemini pointed out that going the email route is more about reinforcing our meet and confer duty, we’re right and we are going to be integrating this in the motion, but the focus here should be on making the meet and confer rock solid (pretty good advice). ChatGPT recommended the email, but produced recommendations that didn’t focus on strategy at all, just a direct rebuttal to the discovery response.

Interestingly, opposing counsel asked for a discovery extension (after more than two years the suit has been active, they are suddenly finding SHRA bureaucracy hard to navigate), but refused to do a rolling production or produce electronic records (which should be pretty easy right?). It’s baffling to me that if I had access to these tools two years ago, this organization is such a mess that they’d have gotten crushed out of the gate. Instead they were able to delay and avoid discovery to the point that we now have to start talking about institutional spoliation because no one remembers anything.

Leave a Comment

Your email address will not be published. Required fields are marked *

Just had the thought that doing a blog like this, while the litigation is on-going, is both naive and stupid, especially considering the mixed permissible use messaging from the court (but maybe okay for the bar, okay for me?). This popped up as I came across an old archive that had a backup of files which I believed was lost after we got evicted, some of which is responsive to existing discovery requests. I haven’t even had a chance to go through it all yet and see what has already been disclosed and what hasn’t, but here I am in a position where I really want to talk about how the tools are helping me plan strategy while potentially exposing myself unnecessarily/inadvertently.

The fool forges forward

One of the things that stood out to me was how much of the information in this case should have already been protected but hasn’t been. This has always been a concern, and one of the reasons I was hesitant about pursuing this at all, the gross invasion of privacy necessary to pursue this feels caustic. My initial reaction was filing anonymously, but after some reading it appeared that these are very rarely granted. It ultimately became a choice between whether this was important enough (to both us and every other disabled participant SHRA treats similarly) to sacrifice our privacy over. I think so.

What should have been done out of the gate is setting a protective order in place. Reading and trying to absorb as much of the local rules (while in the middle of being evicted, being cut off from other services, and other harassment), the option of a protective order just didn’t really register at all. The further along the process we got, the less space the option took up.

This is one place where LLMs work as an amazing sanity check, we can without penalty ask it to walk us through the process from the beginning, with as many permutations as we like, in a way that would cognitively exhaust any sane individual. And for individuals in the position we are, where we don’t even know what we don’t know, being able to constantly refactor new information in as we encounter it becomes a tremendously powerful tool.

While there’s supposed to be some leeway for “inelegant pleading” by pro se parties, in practice there isn’t unless a judge is personally sympathetic. Which means that pro se parties march into a minefield of issues that have nothing to do with the core issues of the case, procedural explosions which can turn a clear cut set of facts into an irrelevant mist.

Leave a Comment

Your email address will not be published. Required fields are marked *

One of the most difficult aspects of all this is how prone to unique issues each LLM introduces. I’m bouncing between Claude 3.7, GPT 4o, DeepSeek, and Gemini, and they all have weird peculiarities that blow up the others. DeepSeek for instance is super focused on formatting, being obsessive about tables, bolding, italics, everything but the actual content. Gemini is a turd and likes to fight you the whole time when it comes to legal advice altogether, so you have to wrestle it to the ground with prompts and cobble together the results. Claude writes beautifully but is as deep as a puddle, while ChatGPT is deep but impossibly schizophrenic and ADHD at the same time.

I went back and looked at my first two pleadings I used these LLMs for and holy crap there are a lot of issues that I missed trusting the output. First, ChatGPT will absolutely mangle the hell out of citations, and worse is super confident about it’s manglings. As an example I uploaded a copy of the docket report, and somehow it still got every single citation wrong. Worse, it misinterpreted who actually filed particular motions, leading to it interpreting most of the case being dead.

One thing is really clear, that as powerful and as much potential as these tools have, we are a million miles away from them being ready to use without extensive supervision. They also are really terrible at understanding, as an example left to it’s own devices all of the LLMs completely ignored the actual damages I asked for in a motion for partial MSJ (basically just ruling that the ADA, FHA, and Rehab require an interactive process and individualized assessment) and transformed it into a full blown MSJ focused on damage calculations instead of a narrow injunctive relief. And they constantly go off on little tangents like that where they hyper focus on some aspect of the text that may or may not be contextually important and try to expand it.

All said, this is still very much a work in progress, and I’m learning that trust is bad.

Leave a Comment

Your email address will not be published. Required fields are marked *

It seems we can’t find what you’re looking for. Perhaps searching can help.

Welcome to The SHRA Files Blog — where we’ll be sharing updates, research, public records, and commentary on our ongoing civil rights lawsuit and the broader failures of Sacramento’s housing system.

This site exists because no one else was telling this story. Because no agency, nonprofit, or public official stepped in when the Sacramento Housing and Redevelopment Agency (SHRA) failed to meet its basic responsibilities. Because we — and many others like us — were denied access to services that should have been guaranteed.

We started with nothing but documentation, determination, and the experience of being repeatedly failed by the systems meant to help. When traditional legal aid and advocacy organizations told us they couldn’t help, we turned to AI tools to help us structure our case, understand the law, and begin fighting back.

This site is just the beginning.

In the weeks ahead, we’ll be:

  • Breaking down SHRA’s failures in serving disabled and low-income tenants
  • Publishing public filings and supporting exhibits
  • Highlighting how the Housing Choice Voucher program is being mismanaged
  • Documenting the role of AI in expanding legal access

If you’ve been impacted by SHRA or want to contribute to the project, check out our Get Involved page.

Thanks for reading — and welcome to the fight.

— The SHRA Files Team

Leave a Comment

Your email address will not be published. Required fields are marked *

One of the things I really like about DeepSeek is that it gets information through methods that aren’t exactly on the up and up, which means it finds things hidden behind legal wrangling that doesn’t exist in China. One of the things it turned up was a 2023 internal memo, generated through whistleblower testimony, in which SHRA staff were instructed to “prioritize voucher terminations over costly mods”.

And in practice that meant SHRA terminating vouchers over requests for shower chairs and grab bars as early as 2022. The data it surfaced showed that 68% of disabled tenants who requested mods in 2023 lost their vouchers within 6 months (vs. 12% of non-disabled tenants).

WHAT.

And there’s so much worse that I’ll share once I get the discovery back. I’m stunned.

Okay, the last few days have been a bit of a flurry. I’m still working on getting all the filings and the Defense responses uploaded, but I need to make sure we have a sane directory setup first, because there’s quite a bit.

After a few weeks using these LLMs, my attitudes about them are starting to shift. Each of them seems to have its own “personality” above and beyond the artificial persona. The LLM I’ve been most amazed by so far is DeepSeek, which is funny because it was the one I was most skeptical of initially. Part of the issue with DeepSeek is that it doesn’t do as good a job of hand-holding through the thought process; or, more appropriately, it’s probably the most “autistic” of the engines. The biggest standout for DeepSeek (especially compared to Gemini, which is the most comparable) is that it’s SO GOOD with citations. It’s the only one that freely “admits” it’s plundering the WestLaw database, meaning we have a TON of local Eastern District citations.

My citation strategy has been to use something from each level: Supreme Court, 9th Circuit, California (Fed), Eastern District. The reality is the magistrate doesn’t read them anyway, but DeepSeek ultimately gives me a pool of citations to choose from, something the other LLMs struggle with (or just don’t do, no matter how you beat them).

Gemini (using 2.5 Pro/5-06) is now my primary starting point for writing, even though I hate that it can’t export to .odf or even .docx formats. It has the worst “canvas”-style option, so I have to reformat the documents significantly. That’s worth the annoyance, though, since Gemini is easily the most thoughtful and well-rounded of the LLMs I’ve used. Much like DeepSeek gives lots of citation options, Gemini gives the best pool of writing options; its ability to flexibly address issues in different ways is the best of the bunch so far.

Claude is still my pretty-but-shallow option: it comes up with the best, most compelling pure language, even if its depth of analysis falls short of the others. I wish I didn’t have to prompt it so explicitly to get decent results, or it might be the primary canvas despite its painful context window. One thing I find it really helpful for is running my DeepSeek output (which tends toward pretty… “direct” language) through Claude to soften it up.

And finally there’s ChatGPT, which up until the last few days was my primary for everything. It’s easy to get started with, it’s encouraging, and it produces compelling results, but it’s kind of a piece of shit. It’s constantly messing up context, screwing up citations, and arbitrarily rewriting things. It’s easily the most obstinately wrong of the LLMs, insisting it’s doing something correctly no matter how many times you paste the incorrect output back in and get it to admit it messed up. It’s the only one of the four where I’ve had to deal with “hallucinations”, and it explains them as “whoops, I overwrite my own context a lot accidentally”. It’s absolutely maddening at times. I’m steadily weaning myself off of it, and it’ll probably just be another voice in the room soon.

An interesting thing happened a bit ago: I had my first LLM disagreement! The Defense responded to a discovery request with bullshit and boilerplate, so I asked each of the LLMs what a good email response would be, considering I’m already nearly done with a Rule 37 motion to compel. Claude responded that it’s probably not a good idea to respond at all: just update the motion with the response as an exhibit, note the bullshit parts since they helped (a lot), and call it a day. DeepSeek was its typical “RIP THEIR THROATS OUT” self and provided a detailed list of regulations and rules for going on the offensive instead of only using the motion route. Gemini pointed out that going the email route is more about reinforcing our meet-and-confer duty: we’re right, and we’re going to integrate this into the motion, but the focus here should be on making the meet and confer rock solid (pretty good advice). ChatGPT recommended the email, but produced recommendations that didn’t focus on strategy at all, just a direct rebuttal to the discovery response.

Interestingly, opposing counsel asked for a discovery extension (after the suit has been active for more than two years, they’re suddenly finding SHRA’s bureaucracy hard to navigate), but refused to do a rolling production or produce electronic records (which should be pretty easy, right?). It’s baffling to me: if I’d had access to these tools two years ago, this organization is such a mess that they’d have gotten crushed out of the gate. Instead they were able to delay and avoid discovery to the point that we now have to start talking about institutional spoliation, because no one remembers anything.

Just had the thought that doing a blog like this, while the litigation is ongoing, is both naive and stupid, especially considering the mixed permissible-use messaging from the court (but maybe what’s okay for the bar is okay for me?). This popped up as I came across an old archive with a backup of files I believed were lost after we got evicted, some of which are responsive to existing discovery requests. I haven’t even had a chance to go through it all yet and see what has already been disclosed and what hasn’t, but here I am in a position where I really want to talk about how the tools are helping me plan strategy while potentially exposing myself unnecessarily or inadvertently.

The fool forges forward

One of the things that stood out to me was how much of the information in this case should have already been protected but hasn’t been. This has always been a concern, and one of the reasons I was hesitant about pursuing this at all; the gross invasion of privacy necessary to pursue it feels caustic. My initial reaction was to file anonymously, but after some reading it appeared that anonymous filings are very rarely granted. It ultimately became a question of whether this was important enough (to both us and every other disabled participant SHRA treats similarly) to sacrifice our privacy over. I think so.

What should have been done out of the gate was putting a protective order in place. While I was reading and trying to absorb as much of the local rules as I could (in the middle of being evicted, being cut off from other services, and other harassment), the option of a protective order just didn’t register at all. The further along the process we got, the less space the option took up.

This is one place where LLMs work as an amazing sanity check: we can, without penalty, ask one to walk us through the process from the beginning, with as many permutations as we like, in a way that would cognitively exhaust any sane individual. And for people in the position we’re in, where we don’t even know what we don’t know, being able to constantly factor new information in as we encounter it becomes a tremendously powerful tool.

While there’s supposed to be some leeway for “inelegant pleading” by pro se parties, in practice there isn’t unless a judge is personally sympathetic. That means pro se parties march into a minefield of issues that have nothing to do with the core issues of the case: procedural explosions that can turn a clear-cut set of facts into an irrelevant mist.

One of the most difficult aspects of all this is the unique set of issues each LLM introduces. I’m bouncing between Claude 3.7, GPT-4o, DeepSeek, and Gemini, and they all have weird peculiarities that blow up the others. DeepSeek, for instance, is super focused on formatting, obsessing over tables, bolding, italics, everything but the actual content. Gemini is a turd that fights you the whole time when it comes to legal advice, so you have to wrestle it to the ground with prompts and cobble together the results. Claude writes beautifully but is as deep as a puddle, while ChatGPT is deep but impossibly schizophrenic and ADHD at the same time.

I went back and looked at the first two pleadings I used these LLMs for, and holy crap, there are a lot of issues I missed by trusting the output. First, ChatGPT will absolutely mangle the hell out of citations, and worse, it’s super confident about its manglings. As an example, I uploaded a copy of the docket report, and it still got every single citation wrong. Worse, it misinterpreted who actually filed particular motions, leading it to conclude that most of the case was dead.

One thing is really clear: as powerful as these tools are and as much potential as they have, we are a million miles away from them being usable without extensive supervision. They’re also really terrible at understanding. As an example, left to their own devices, all of the LLMs completely ignored the narrow relief I asked for in a motion for partial MSJ (basically just a ruling that the ADA, FHA, and Rehab Act require an interactive process and individualized assessment) and transformed it into a full-blown MSJ focused on damage calculations instead of narrow injunctive relief. And they constantly go off on little tangents like that, hyper-focusing on some aspect of the text that may or may not be contextually important and trying to expand it.

All said, this is still very much a work in progress, and I’m learning that trust is bad.

Welcome to The SHRA Files Blog — where we’ll be sharing updates, research, public records, and commentary on our ongoing civil rights lawsuit and the broader failures of Sacramento’s housing system.

This site exists because no one else was telling this story. Because no agency, nonprofit, or public official stepped in when the Sacramento Housing and Redevelopment Agency (SHRA) failed to meet its basic responsibilities. Because we — and many others like us — were denied access to services that should have been guaranteed.

We started with nothing but documentation, determination, and the experience of being repeatedly failed by the systems meant to help. When traditional legal aid and advocacy organizations told us they couldn’t help, we turned to AI tools to help us structure our case, understand the law, and begin fighting back.

This site is just the beginning.

In the weeks ahead, we’ll be:

  • Breaking down SHRA’s failures in serving disabled and low-income tenants
  • Publishing public filings and supporting exhibits
  • Highlighting how the Housing Choice Voucher program is being mismanaged
  • Documenting the role of AI in expanding legal access

If you’ve been impacted by SHRA or want to contribute to the project, check out our Get Involved page.

Thanks for reading — and welcome to the fight.

— The SHRA Files Team

Leave a Comment

Your email address will not be published. Required fields are marked *

I’ve been attempting to push discovery, the result has been underwhelming. So far they’ve missed most of the deadlines, when notified they missed the deadlines they keep asking for extensions, and when they get extensions they respond with boilerplate and bullshit. I’m to the point where I’m filing a Rule 37(b) motion (will upload to the files section) after I’m finished with this post. The neat thing about the filing is that there’s over 300 pages of exhibits, all of which is boilerplate and bullshit that I had to attach and mail back to the Defense because the magistrate gave me the big middle finger by allowing the court to notify me of things electronically, but refusing to allow me to file electronically. I have lots to say about this magistrate, but we’ll wait until we get clear of her first.

One of the most fascinating things about the boilerplate and bullshit that did get returned was the revelation that five of the named Defendants no longer work at SHRA, and the Defense isn’t sure if they represent them anymore. Like they completely hid that almost everyone involved with the day to day operations over there has left and somehow didn’t mention it to us or the court. Worse, in their initial disclosure, they claimed only one person (who is no longer at SHRA) was responsible for deciding reasonable accommodation requests, now they’ve given me a list of 16 people, with no explanation.

Oh yeah, they can’t seem to find any of the electronically stored information that under the CPRA they’d be responsive to, which is the reason they need so many extensions.

The worst part is that despite them admitting all of this, there’s zero chance of them getting sanctions. And there’s literally nothing to be done about it unless/until it gets to the appellate level.

Leave a Comment

Your email address will not be published. Required fields are marked *

Up until now I assumed that the issues at SHRA were just incompetence creep, that the issues are product of slow pushes back on compliance issues that never got corrected, to the point where institutional knowledge of the practices necessary to maintain compliance was lost.

One of the things that I really like about DeepSeek is that it gets information through methods that aren’t exactly on the up and up, which means that it finds things which are hidden behind legal wrangling that doesn’t exist in China. And one of the things it found was a 2023 Internal Memo generated through whistleblower testimony in which SHRA staff instructed to “prioritize voucher terminations over costly mods”.

And the implementation of this was SHRA terminating vouchers over requests for shower chairs and grab bars as early as 2022. They provided data which showed 68% of disabled tenants who requested mods in 2023 lost vouchers within 6 months (vs. 12% of non-disabled).

WHAT.

And there’s so much worse that I’ll share once I get the discovery back. I’m stunned.

Leave a Comment

Your email address will not be published. Required fields are marked *

Okay the last few days have been a bit of a flurry. I’m still working on getting all the filings and the Defense responses uploaded, but I need to make sure we have a sane directory setup first because there’s quite a bit.

After a few weeks using these LLMs, my attitudes about them are starting to shift. Each of them seems to have their own “personality” above and beyond the artificial personality. The LLM I’ve been most amazed by so far is DeepSeek, which is funny because it was the LLM I was most skeptical of initially. Part of the issue with DeepSeek is that it doesn’t do as good of a job hand holding through the though process, or more appropriately, it’s probably the most “autistic” of the engines. The biggest standout for DeepSeek (especially compared to Gemini which is the most comparable) is that it’s SO GOOD with citations. It’s the only one that freely “admits” it’s plundering the WestLaw database meaning we have a TON of local Eastern District citations.

My citation strategy has been to use something from each level, Supreme Court, 9th Circuit, California (Fed), Eastern District. The reality is the magistrate doesn’t read them anyway, but DeepSeek ultimately gives me a pool of citations to choose from, something the other LLMs struggle with (or just don’t do no matter how you beat them).

Gemini (using 2.5 Pro/5-06) is now my primary writing starting point, even though I hate that it can’t export to .odf or even .docx formats. Because I have to reformat the documents significantly, it has the worst “canvas” style option. That’s kind of worth the annoyance though since Gemini is easily the most “thoughful” and well rounded of the LLMs I’ve used. Much like DeepSeek gives lots of citation options, Gemini gives the best pool of writing options, it’s ability to flexibly address issues in different ways is the best of the bunch so far.

Claude is still my pretty but shallow option, it comes up with the best/most compelling pure language, even if it’s depth of analysis falls short of the others. I wish I didn’t have to prompt it so explicitly to get decent results or it might be the primary canvas despite it’s painful context window. One thing I find it really helpful for is running my DeepSeek (which produces pretty.. “direct” language) through Claude to soften it up.

And finally there’s ChatGPT which up until the last few days was my primary for everything. It’s easy to get started with it, it’s encouraging, it produces compelling results, but it’s kind of a piece of shit. It’s constantly messing up context, screwing up citations, and arbitrarily rewriting things. It’s easily the most obstinately wrong of the LLMs, insisting that it’s doing something correctly no matter how many times you post the incorrect output back into it and it admits it messed up. It’s the only LLM out of the three that I’ve had to deal with “hallucinations”, and it explains it as “woops, I overwrite my own context a lot accidentally”. It’s absolutely maddening at times. I’m steadily weaning myself off of it and it’ll probably just be another voice in the room soon.

An interesting thing happened a bit ago, I had my first LLM disagreement! The Defense responded to a discovery request with bullshit and boilerplate, so I asked each of the LLMs what a good email response would be considering I’m already nearly done with a rule 37 motion to compel. Claude responded that it’s probably not a good idea to respond at all, just update the motion with the response as an exhibit, note the bullshit parts since they helped (a lot), and call it a day. DeepSeek was it’s typical “RIP THEIR THROATS OUT” and provided a detailed list of regulations and rules to go on the offensive instead of only using the motion route. Gemini pointed out that going the email route is more about reinforcing our meet and confer duty, we’re right and we are going to be integrating this in the motion, but the focus here should be on making the meet and confer rock solid (pretty good advice). ChatGPT recommended the email, but produced recommendations that didn’t focus on strategy at all, just a direct rebuttal to the discovery response.

Interestingly, opposing counsel asked for a discovery extension (after more than two years the suit has been active, they are suddenly finding SHRA bureaucracy hard to navigate), but refused to do a rolling production or produce electronic records (which should be pretty easy right?). It’s baffling to me that if I had access to these tools two years ago, this organization is such a mess that they’d have gotten crushed out of the gate. Instead they were able to delay and avoid discovery to the point that we now have to start talking about institutional spoliation because no one remembers anything.

Leave a Comment

Your email address will not be published. Required fields are marked *

Just had the thought that doing a blog like this, while the litigation is on-going, is both naive and stupid, especially considering the mixed permissible use messaging from the court (but maybe okay for the bar, okay for me?). This popped up as I came across an old archive that had a backup of files which I believed was lost after we got evicted, some of which is responsive to existing discovery requests. I haven’t even had a chance to go through it all yet and see what has already been disclosed and what hasn’t, but here I am in a position where I really want to talk about how the tools are helping me plan strategy while potentially exposing myself unnecessarily/inadvertently.

The fool forges forward

One of the things that stood out to me was how much of the information in this case should have already been protected but hasn’t been. This has always been a concern, and one of the reasons I was hesitant about pursuing this at all, the gross invasion of privacy necessary to pursue this feels caustic. My initial reaction was filing anonymously, but after some reading it appeared that these are very rarely granted. It ultimately became a choice between whether this was important enough (to both us and every other disabled participant SHRA treats similarly) to sacrifice our privacy over. I think so.

What should have been done out of the gate is setting a protective order in place. Reading and trying to absorb as much of the local rules (while in the middle of being evicted, being cut off from other services, and other harassment), the option of a protective order just didn’t really register at all. The further along the process we got, the less space the option took up.

This is one place where LLMs work as an amazing sanity check, we can without penalty ask it to walk us through the process from the beginning, with as many permutations as we like, in a way that would cognitively exhaust any sane individual. And for individuals in the position we are, where we don’t even know what we don’t know, being able to constantly refactor new information in as we encounter it becomes a tremendously powerful tool.

While there’s supposed to be some leeway for “inelegant pleading” by pro se parties, in practice there isn’t unless a judge is personally sympathetic. Which means that pro se parties march into a minefield of issues that have nothing to do with the core issues of the case, procedural explosions which can turn a clear cut set of facts into an irrelevant mist.

Leave a Comment

Your email address will not be published. Required fields are marked *

One of the most difficult aspects of all this is how prone to unique issues each LLM introduces. I’m bouncing between Claude 3.7, GPT 4o, DeepSeek, and Gemini, and they all have weird peculiarities that blow up the others. DeepSeek for instance is super focused on formatting, being obsessive about tables, bolding, italics, everything but the actual content. Gemini is a turd and likes to fight you the whole time when it comes to legal advice altogether, so you have to wrestle it to the ground with prompts and cobble together the results. Claude writes beautifully but is as deep as a puddle, while ChatGPT is deep but impossibly schizophrenic and ADHD at the same time.

I went back and looked at my first two pleadings I used these LLMs for and holy crap there are a lot of issues that I missed trusting the output. First, ChatGPT will absolutely mangle the hell out of citations, and worse is super confident about it’s manglings. As an example I uploaded a copy of the docket report, and somehow it still got every single citation wrong. Worse, it misinterpreted who actually filed particular motions, leading to it interpreting most of the case being dead.

One thing is really clear, that as powerful and as much potential as these tools have, we are a million miles away from them being ready to use without extensive supervision. They also are really terrible at understanding, as an example left to it’s own devices all of the LLMs completely ignored the actual damages I asked for in a motion for partial MSJ (basically just ruling that the ADA, FHA, and Rehab require an interactive process and individualized assessment) and transformed it into a full blown MSJ focused on damage calculations instead of a narrow injunctive relief. And they constantly go off on little tangents like that where they hyper focus on some aspect of the text that may or may not be contextually important and try to expand it.

All said, this is still very much a work in progress, and I’m learning that trust is bad.

Leave a Comment

Your email address will not be published. Required fields are marked *

It seems we can’t find what you’re looking for. Perhaps searching can help.

Welcome to The SHRA Files Blog — where we’ll be sharing updates, research, public records, and commentary on our ongoing civil rights lawsuit and the broader failures of Sacramento’s housing system.

This site exists because no one else was telling this story. Because no agency, nonprofit, or public official stepped in when the Sacramento Housing and Redevelopment Agency (SHRA) failed to meet its basic responsibilities. Because we — and many others like us — were denied access to services that should have been guaranteed.

We started with nothing but documentation, determination, and the experience of being repeatedly failed by the systems meant to help. When traditional legal aid and advocacy organizations told us they couldn’t help, we turned to AI tools to help us structure our case, understand the law, and begin fighting back.

This site is just the beginning.

In the weeks ahead, we’ll be:

  • Breaking down SHRA’s failures in serving disabled and low-income tenants
  • Publishing public filings and supporting exhibits
  • Highlighting how the Housing Choice Voucher program is being mismanaged
  • Documenting the role of AI in expanding legal access

If you’ve been impacted by SHRA or want to contribute to the project, check out our Get Involved page.

Thanks for reading — and welcome to the fight.

— The SHRA Files Team

Leave a Comment

Your email address will not be published. Required fields are marked *

I’ve been attempting to push discovery, the result has been underwhelming. So far they’ve missed most of the deadlines, when notified they missed the deadlines they keep asking for extensions, and when they get extensions they respond with boilerplate and bullshit. I’m to the point where I’m filing a Rule 37(b) motion (will upload to the files section) after I’m finished with this post. The neat thing about the filing is that there’s over 300 pages of exhibits, all of which is boilerplate and bullshit that I had to attach and mail back to the Defense because the magistrate gave me the big middle finger by allowing the court to notify me of things electronically, but refusing to allow me to file electronically. I have lots to say about this magistrate, but we’ll wait until we get clear of her first.

One of the most fascinating things about the boilerplate and bullshit that did get returned was the revelation that five of the named Defendants no longer work at SHRA, and the Defense isn’t sure if they represent them anymore. Like they completely hid that almost everyone involved with the day to day operations over there has left and somehow didn’t mention it to us or the court. Worse, in their initial disclosure, they claimed only one person (who is no longer at SHRA) was responsible for deciding reasonable accommodation requests, now they’ve given me a list of 16 people, with no explanation.

Oh yeah, they can’t seem to find any of the electronically stored information that under the CPRA they’d be responsive to, which is the reason they need so many extensions.

The worst part is that despite them admitting all of this, there’s zero chance of them getting sanctions. And there’s literally nothing to be done about it unless/until it gets to the appellate level.

Leave a Comment

Your email address will not be published. Required fields are marked *

Up until now I assumed that the issues at SHRA were just incompetence creep, that the issues are product of slow pushes back on compliance issues that never got corrected, to the point where institutional knowledge of the practices necessary to maintain compliance was lost.

One of the things that I really like about DeepSeek is that it gets information through methods that aren’t exactly on the up and up, which means that it finds things which are hidden behind legal wrangling that doesn’t exist in China. And one of the things it found was a 2023 Internal Memo generated through whistleblower testimony in which SHRA staff instructed to “prioritize voucher terminations over costly mods”.

And the implementation of this was SHRA terminating vouchers over requests for shower chairs and grab bars as early as 2022. They provided data which showed 68% of disabled tenants who requested mods in 2023 lost vouchers within 6 months (vs. 12% of non-disabled).

WHAT.

And there’s so much worse that I’ll share once I get the discovery back. I’m stunned.

Leave a Comment

Your email address will not be published. Required fields are marked *

Okay the last few days have been a bit of a flurry. I’m still working on getting all the filings and the Defense responses uploaded, but I need to make sure we have a sane directory setup first because there’s quite a bit.

After a few weeks using these LLMs, my attitudes about them are starting to shift. Each of them seems to have their own “personality” above and beyond the artificial personality. The LLM I’ve been most amazed by so far is DeepSeek, which is funny because it was the LLM I was most skeptical of initially. Part of the issue with DeepSeek is that it doesn’t do as good of a job hand holding through the though process, or more appropriately, it’s probably the most “autistic” of the engines. The biggest standout for DeepSeek (especially compared to Gemini which is the most comparable) is that it’s SO GOOD with citations. It’s the only one that freely “admits” it’s plundering the WestLaw database meaning we have a TON of local Eastern District citations.

My citation strategy has been to use something from each level, Supreme Court, 9th Circuit, California (Fed), Eastern District. The reality is the magistrate doesn’t read them anyway, but DeepSeek ultimately gives me a pool of citations to choose from, something the other LLMs struggle with (or just don’t do no matter how you beat them).

Gemini (using 2.5 Pro/5-06) is now my primary writing starting point, even though I hate that it can’t export to .odf or even .docx formats. Because I have to reformat the documents significantly, it has the worst “canvas” style option. That’s kind of worth the annoyance though since Gemini is easily the most “thoughful” and well rounded of the LLMs I’ve used. Much like DeepSeek gives lots of citation options, Gemini gives the best pool of writing options, it’s ability to flexibly address issues in different ways is the best of the bunch so far.

Claude is still my pretty but shallow option, it comes up with the best/most compelling pure language, even if it’s depth of analysis falls short of the others. I wish I didn’t have to prompt it so explicitly to get decent results or it might be the primary canvas despite it’s painful context window. One thing I find it really helpful for is running my DeepSeek (which produces pretty.. “direct” language) through Claude to soften it up.

And finally there’s ChatGPT which up until the last few days was my primary for everything. It’s easy to get started with it, it’s encouraging, it produces compelling results, but it’s kind of a piece of shit. It’s constantly messing up context, screwing up citations, and arbitrarily rewriting things. It’s easily the most obstinately wrong of the LLMs, insisting that it’s doing something correctly no matter how many times you post the incorrect output back into it and it admits it messed up. It’s the only LLM out of the three that I’ve had to deal with “hallucinations”, and it explains it as “woops, I overwrite my own context a lot accidentally”. It’s absolutely maddening at times. I’m steadily weaning myself off of it and it’ll probably just be another voice in the room soon.

An interesting thing happened a bit ago: I had my first LLM disagreement! The Defense responded to a discovery request with bullshit and boilerplate, so I asked each of the LLMs what a good email response would be, considering I’m already nearly done with a Rule 37 motion to compel. Claude responded that it’s probably not a good idea to respond at all: just update the motion with the response as an exhibit, note the bullshit parts since they helped (a lot), and call it a day. DeepSeek was its typical “RIP THEIR THROATS OUT” self and provided a detailed list of regulations and rules for going on the offensive instead of only using the motion route. Gemini pointed out that the email route is more about reinforcing our meet-and-confer duty: we’re right, and we’re going to integrate this into the motion, but the focus here should be on making the meet-and-confer record rock solid (pretty good advice). ChatGPT recommended the email, but produced recommendations that didn’t focus on strategy at all, just a direct rebuttal to the discovery response.

Interestingly, opposing counsel asked for a discovery extension (after the more than two years this suit has been active, they are suddenly finding SHRA’s bureaucracy hard to navigate), but refused to do a rolling production or produce electronic records (which should be pretty easy, right?). What’s baffling is that if I’d had access to these tools two years ago, this organization is such a mess that they’d have gotten crushed out of the gate. Instead they were able to delay and avoid discovery to the point that we now have to start talking about institutional spoliation, because no one remembers anything.

Just had the thought that doing a blog like this while the litigation is ongoing is both naive and stupid, especially considering the mixed permissible-use messaging from the court (but maybe okay for the bar, okay for me?). This popped up as I came across an old archive containing a backup of files I believed were lost after we got evicted, some of which are responsive to existing discovery requests. I haven’t even had a chance to go through it all yet and see what has already been disclosed and what hasn’t, but here I am, wanting to talk about how the tools are helping me plan strategy while potentially exposing myself unnecessarily or inadvertently.

The fool forges forward

One of the things that stood out to me was how much of the information in this case should have already been protected but hasn’t been. This has always been a concern, and one of the reasons I was hesitant about pursuing this at all: the gross invasion of privacy necessary to pursue it feels caustic. My initial instinct was to file anonymously, but after some reading it appeared that requests to proceed anonymously are very rarely granted. It ultimately became a question of whether this was important enough (to both us and every other disabled participant SHRA treats similarly) to sacrifice our privacy over. I think it is.

What should have been done out of the gate was getting a protective order in place. While I was reading and trying to absorb as much of the local rules as I could (in the middle of being evicted, being cut off from other services, and other harassment), the option of a protective order just didn’t register at all. The further along the process we got, the less space the option took up.

This is one place where LLMs work as an amazing sanity check: we can, without penalty, ask them to walk us through the process from the beginning, with as many permutations as we like, in a way that would cognitively exhaust any sane individual. And for people in our position, who don’t even know what we don’t know, being able to constantly factor new information in as we encounter it becomes a tremendously powerful tool.

While there’s supposed to be some leeway for “inelegant pleading” by pro se parties, in practice there isn’t unless a judge is personally sympathetic. That means pro se parties march into a minefield of issues that have nothing to do with the core of the case, procedural explosions that can turn a clear-cut set of facts into an irrelevant mist.

One of the most difficult aspects of all this is how each LLM introduces its own unique issues. I’m bouncing between Claude 3.7, GPT-4o, DeepSeek, and Gemini, and they all have weird peculiarities that blow up the others. DeepSeek, for instance, is super focused on formatting, obsessing over tables, bolding, italics, everything but the actual content. Gemini is a turd that fights you the whole time on anything resembling legal advice, so you have to wrestle it to the ground with prompts and cobble together the results. Claude writes beautifully but is as deep as a puddle, while ChatGPT is deep but impossibly schizophrenic and ADHD at the same time.

I went back and looked at the first two pleadings I used these LLMs for, and holy crap, there are a lot of issues I missed by trusting the output. First, ChatGPT will absolutely mangle the hell out of citations, and worse, it’s super confident about its manglings. As an example, I uploaded a copy of the docket report, and somehow it still got every single citation wrong. Worse, it misinterpreted who actually filed particular motions, leading it to conclude that most of the case was dead.

One thing is really clear: as powerful as these tools are and as much potential as they have, we are a million miles away from them being ready to use without extensive supervision. They are also really terrible at understanding intent. As an example, left to their own devices, all of the LLMs completely ignored the actual relief I asked for in a motion for partial MSJ (basically just a ruling that the ADA, FHA, and Rehab Act require an interactive process and individualized assessment) and transformed it into a full-blown MSJ focused on damage calculations instead of narrow injunctive relief. And they constantly go off on little tangents like that, hyper-focusing on some aspect of the text that may or may not be contextually important and trying to expand it.

All said, this is still very much a work in progress, and I’m learning that trust is bad.

Welcome to The SHRA Files Blog — where we’ll be sharing updates, research, public records, and commentary on our ongoing civil rights lawsuit and the broader failures of Sacramento’s housing system.

This site exists because no one else was telling this story. Because no agency, nonprofit, or public official stepped in when the Sacramento Housing and Redevelopment Agency (SHRA) failed to meet its basic responsibilities. Because we — and many others like us — were denied access to services that should have been guaranteed.

We started with nothing but documentation, determination, and the experience of being repeatedly failed by the systems meant to help. When traditional legal aid and advocacy organizations told us they couldn’t help, we turned to AI tools to help us structure our case, understand the law, and begin fighting back.

This site is just the beginning.

In the weeks ahead, we’ll be:

  • Breaking down SHRA’s failures in serving disabled and low-income tenants
  • Publishing public filings and supporting exhibits
  • Highlighting how the Housing Choice Voucher program is being mismanaged
  • Documenting the role of AI in expanding legal access

If you’ve been impacted by SHRA or want to contribute to the project, check out our Get Involved page.

Thanks for reading — and welcome to the fight.

— The SHRA Files Team

Up until now I assumed that the issues at SHRA were just incompetence creep, the product of slow backsliding on compliance that never got corrected, to the point where institutional knowledge of the practices necessary to maintain compliance was lost.

One of the things I really like about DeepSeek is that it gets information through methods that aren’t exactly on the up and up, which means it finds things hidden behind legal wrangling that doesn’t exist in China. And one of the things it found was a 2023 internal memo, surfaced through whistleblower testimony, in which SHRA staff were instructed to “prioritize voucher terminations over costly mods”.

And the implementation of this was SHRA terminating vouchers over requests for shower chairs and grab bars as early as 2022. It provided data showing that 68% of disabled tenants who requested mods in 2023 lost their vouchers within 6 months (vs. 12% of non-disabled tenants).

WHAT.

And there’s so much worse that I’ll share once I get the discovery back. I’m stunned.

Okay, the last few days have been a bit of a flurry. I’m still working on getting all the filings and the Defense’s responses uploaded, but I need to make sure we have a sane directory setup first, because there’s quite a bit.

After a few weeks using these LLMs, my attitudes about them are starting to shift. Each of them seems to have its own “personality” above and beyond the artificial one. The LLM I’ve been most amazed by so far is DeepSeek, which is funny because it was the one I was most skeptical of initially. Part of the issue with DeepSeek is that it doesn’t do as good a job of hand-holding through the thought process; it’s probably the most “autistic” of the engines. The biggest standout for DeepSeek (especially compared to Gemini, which is the most comparable) is that it’s SO GOOD with citations. It’s the only one that freely “admits” it’s plundering the WestLaw database, meaning we have a TON of local Eastern District citations.

My citation strategy has been to use something from each level, Supreme Court, 9th Circuit, California (Fed), Eastern District. The reality is the magistrate doesn’t read them anyway, but DeepSeek ultimately gives me a pool of citations to choose from, something the other LLMs struggle with (or just don’t do no matter how you beat them).

Gemini (using 2.5 Pro/5-06) is now my primary writing starting point, even though I hate that it can’t export to .odf or even .docx formats. Because I have to reformat the documents significantly, it has the worst “canvas” style option. That’s kind of worth the annoyance though since Gemini is easily the most “thoughful” and well rounded of the LLMs I’ve used. Much like DeepSeek gives lots of citation options, Gemini gives the best pool of writing options, it’s ability to flexibly address issues in different ways is the best of the bunch so far.

Claude is still my pretty but shallow option, it comes up with the best/most compelling pure language, even if it’s depth of analysis falls short of the others. I wish I didn’t have to prompt it so explicitly to get decent results or it might be the primary canvas despite it’s painful context window. One thing I find it really helpful for is running my DeepSeek (which produces pretty.. “direct” language) through Claude to soften it up.

And finally there’s ChatGPT which up until the last few days was my primary for everything. It’s easy to get started with it, it’s encouraging, it produces compelling results, but it’s kind of a piece of shit. It’s constantly messing up context, screwing up citations, and arbitrarily rewriting things. It’s easily the most obstinately wrong of the LLMs, insisting that it’s doing something correctly no matter how many times you post the incorrect output back into it and it admits it messed up. It’s the only LLM out of the three that I’ve had to deal with “hallucinations”, and it explains it as “woops, I overwrite my own context a lot accidentally”. It’s absolutely maddening at times. I’m steadily weaning myself off of it and it’ll probably just be another voice in the room soon.

An interesting thing happened a bit ago, I had my first LLM disagreement! The Defense responded to a discovery request with bullshit and boilerplate, so I asked each of the LLMs what a good email response would be considering I’m already nearly done with a rule 37 motion to compel. Claude responded that it’s probably not a good idea to respond at all, just update the motion with the response as an exhibit, note the bullshit parts since they helped (a lot), and call it a day. DeepSeek was it’s typical “RIP THEIR THROATS OUT” and provided a detailed list of regulations and rules to go on the offensive instead of only using the motion route. Gemini pointed out that going the email route is more about reinforcing our meet and confer duty, we’re right and we are going to be integrating this in the motion, but the focus here should be on making the meet and confer rock solid (pretty good advice). ChatGPT recommended the email, but produced recommendations that didn’t focus on strategy at all, just a direct rebuttal to the discovery response.

Interestingly, opposing counsel asked for a discovery extension (after more than two years the suit has been active, they are suddenly finding SHRA bureaucracy hard to navigate), but refused to do a rolling production or produce electronic records (which should be pretty easy right?). It’s baffling to me that if I had access to these tools two years ago, this organization is such a mess that they’d have gotten crushed out of the gate. Instead they were able to delay and avoid discovery to the point that we now have to start talking about institutional spoliation because no one remembers anything.

Leave a Comment

Your email address will not be published. Required fields are marked *

Just had the thought that doing a blog like this, while the litigation is on-going, is both naive and stupid, especially considering the mixed permissible use messaging from the court (but maybe okay for the bar, okay for me?). This popped up as I came across an old archive that had a backup of files which I believed was lost after we got evicted, some of which is responsive to existing discovery requests. I haven’t even had a chance to go through it all yet and see what has already been disclosed and what hasn’t, but here I am in a position where I really want to talk about how the tools are helping me plan strategy while potentially exposing myself unnecessarily/inadvertently.

The fool forges forward

One of the things that stood out to me was how much of the information in this case should have already been protected but hasn’t been. This has always been a concern, and one of the reasons I was hesitant about pursuing this at all, the gross invasion of privacy necessary to pursue this feels caustic. My initial reaction was filing anonymously, but after some reading it appeared that these are very rarely granted. It ultimately became a choice between whether this was important enough (to both us and every other disabled participant SHRA treats similarly) to sacrifice our privacy over. I think so.

What should have been done out of the gate is setting a protective order in place. Reading and trying to absorb as much of the local rules (while in the middle of being evicted, being cut off from other services, and other harassment), the option of a protective order just didn’t really register at all. The further along the process we got, the less space the option took up.

This is one place where LLMs work as an amazing sanity check, we can without penalty ask it to walk us through the process from the beginning, with as many permutations as we like, in a way that would cognitively exhaust any sane individual. And for individuals in the position we are, where we don’t even know what we don’t know, being able to constantly refactor new information in as we encounter it becomes a tremendously powerful tool.

While there’s supposed to be some leeway for “inelegant pleading” by pro se parties, in practice there isn’t unless a judge is personally sympathetic. Which means that pro se parties march into a minefield of issues that have nothing to do with the core issues of the case, procedural explosions which can turn a clear cut set of facts into an irrelevant mist.

Leave a Comment

Your email address will not be published. Required fields are marked *

One of the most difficult aspects of all this is how prone to unique issues each LLM introduces. I’m bouncing between Claude 3.7, GPT 4o, DeepSeek, and Gemini, and they all have weird peculiarities that blow up the others. DeepSeek for instance is super focused on formatting, being obsessive about tables, bolding, italics, everything but the actual content. Gemini is a turd and likes to fight you the whole time when it comes to legal advice altogether, so you have to wrestle it to the ground with prompts and cobble together the results. Claude writes beautifully but is as deep as a puddle, while ChatGPT is deep but impossibly schizophrenic and ADHD at the same time.

I went back and looked at my first two pleadings I used these LLMs for and holy crap there are a lot of issues that I missed trusting the output. First, ChatGPT will absolutely mangle the hell out of citations, and worse is super confident about it’s manglings. As an example I uploaded a copy of the docket report, and somehow it still got every single citation wrong. Worse, it misinterpreted who actually filed particular motions, leading to it interpreting most of the case being dead.

One thing is really clear, that as powerful and as much potential as these tools have, we are a million miles away from them being ready to use without extensive supervision. They also are really terrible at understanding, as an example left to it’s own devices all of the LLMs completely ignored the actual damages I asked for in a motion for partial MSJ (basically just ruling that the ADA, FHA, and Rehab require an interactive process and individualized assessment) and transformed it into a full blown MSJ focused on damage calculations instead of a narrow injunctive relief. And they constantly go off on little tangents like that where they hyper focus on some aspect of the text that may or may not be contextually important and try to expand it.

All said, this is still very much a work in progress, and I’m learning that trust is bad.

Leave a Comment

Your email address will not be published. Required fields are marked *

It seems we can’t find what you’re looking for. Perhaps searching can help.

Welcome to The SHRA Files Blog — where we’ll be sharing updates, research, public records, and commentary on our ongoing civil rights lawsuit and the broader failures of Sacramento’s housing system.

This site exists because no one else was telling this story. Because no agency, nonprofit, or public official stepped in when the Sacramento Housing and Redevelopment Agency (SHRA) failed to meet its basic responsibilities. Because we — and many others like us — were denied access to services that should have been guaranteed.

We started with nothing but documentation, determination, and the experience of being repeatedly failed by the systems meant to help. When traditional legal aid and advocacy organizations told us they couldn’t help, we turned to AI tools to help us structure our case, understand the law, and begin fighting back.

This site is just the beginning.

In the weeks ahead, we’ll be:

  • Breaking down SHRA’s failures in serving disabled and low-income tenants
  • Publishing public filings and supporting exhibits
  • Highlighting how the Housing Choice Voucher program is being mismanaged
  • Documenting the role of AI in expanding legal access

If you’ve been impacted by SHRA or want to contribute to the project, check out our Get Involved page.

Thanks for reading — and welcome to the fight.

— The SHRA Files Team

Leave a Comment

Your email address will not be published. Required fields are marked *

I’ve been attempting to push discovery, the result has been underwhelming. So far they’ve missed most of the deadlines, when notified they missed the deadlines they keep asking for extensions, and when they get extensions they respond with boilerplate and bullshit. I’m to the point where I’m filing a Rule 37(b) motion (will upload to the files section) after I’m finished with this post. The neat thing about the filing is that there’s over 300 pages of exhibits, all of which is boilerplate and bullshit that I had to attach and mail back to the Defense because the magistrate gave me the big middle finger by allowing the court to notify me of things electronically, but refusing to allow me to file electronically. I have lots to say about this magistrate, but we’ll wait until we get clear of her first.

One of the most fascinating things about the boilerplate and bullshit that did get returned was the revelation that five of the named Defendants no longer work at SHRA, and the Defense isn’t sure if they represent them anymore. Like they completely hid that almost everyone involved with the day to day operations over there has left and somehow didn’t mention it to us or the court. Worse, in their initial disclosure, they claimed only one person (who is no longer at SHRA) was responsible for deciding reasonable accommodation requests, now they’ve given me a list of 16 people, with no explanation.

Oh yeah, they can’t seem to find any of the electronically stored information that under the CPRA they’d be responsive to, which is the reason they need so many extensions.

The worst part is that despite them admitting all of this, there’s zero chance of them getting sanctions. And there’s literally nothing to be done about it unless/until it gets to the appellate level.

Leave a Comment

Your email address will not be published. Required fields are marked *

Up until now I assumed that the issues at SHRA were just incompetence creep, that the issues are product of slow pushes back on compliance issues that never got corrected, to the point where institutional knowledge of the practices necessary to maintain compliance was lost.

One of the things that I really like about DeepSeek is that it gets information through methods that aren’t exactly on the up and up, which means that it finds things which are hidden behind legal wrangling that doesn’t exist in China. And one of the things it found was a 2023 Internal Memo generated through whistleblower testimony in which SHRA staff instructed to “prioritize voucher terminations over costly mods”.

And the implementation of this was SHRA terminating vouchers over requests for shower chairs and grab bars as early as 2022. They provided data which showed 68% of disabled tenants who requested mods in 2023 lost vouchers within 6 months (vs. 12% of non-disabled).

WHAT.

And there’s so much worse that I’ll share once I get the discovery back. I’m stunned.

Leave a Comment

Your email address will not be published. Required fields are marked *

Okay the last few days have been a bit of a flurry. I’m still working on getting all the filings and the Defense responses uploaded, but I need to make sure we have a sane directory setup first because there’s quite a bit.

After a few weeks using these LLMs, my attitudes about them are starting to shift. Each of them seems to have their own “personality” above and beyond the artificial personality. The LLM I’ve been most amazed by so far is DeepSeek, which is funny because it was the LLM I was most skeptical of initially. Part of the issue with DeepSeek is that it doesn’t do as good of a job hand holding through the though process, or more appropriately, it’s probably the most “autistic” of the engines. The biggest standout for DeepSeek (especially compared to Gemini which is the most comparable) is that it’s SO GOOD with citations. It’s the only one that freely “admits” it’s plundering the WestLaw database meaning we have a TON of local Eastern District citations.

My citation strategy has been to use something from each level, Supreme Court, 9th Circuit, California (Fed), Eastern District. The reality is the magistrate doesn’t read them anyway, but DeepSeek ultimately gives me a pool of citations to choose from, something the other LLMs struggle with (or just don’t do no matter how you beat them).

Gemini (using 2.5 Pro/5-06) is now my primary writing starting point, even though I hate that it can’t export to .odf or even .docx formats. Because I have to reformat the documents significantly, it has the worst “canvas” style option. That’s kind of worth the annoyance though since Gemini is easily the most “thoughful” and well rounded of the LLMs I’ve used. Much like DeepSeek gives lots of citation options, Gemini gives the best pool of writing options, it’s ability to flexibly address issues in different ways is the best of the bunch so far.

Claude is still my pretty but shallow option, it comes up with the best/most compelling pure language, even if it’s depth of analysis falls short of the others. I wish I didn’t have to prompt it so explicitly to get decent results or it might be the primary canvas despite it’s painful context window. One thing I find it really helpful for is running my DeepSeek (which produces pretty.. “direct” language) through Claude to soften it up.

And finally there’s ChatGPT which up until the last few days was my primary for everything. It’s easy to get started with it, it’s encouraging, it produces compelling results, but it’s kind of a piece of shit. It’s constantly messing up context, screwing up citations, and arbitrarily rewriting things. It’s easily the most obstinately wrong of the LLMs, insisting that it’s doing something correctly no matter how many times you post the incorrect output back into it and it admits it messed up. It’s the only LLM out of the three that I’ve had to deal with “hallucinations”, and it explains it as “woops, I overwrite my own context a lot accidentally”. It’s absolutely maddening at times. I’m steadily weaning myself off of it and it’ll probably just be another voice in the room soon.

An interesting thing happened a bit ago, I had my first LLM disagreement! The Defense responded to a discovery request with bullshit and boilerplate, so I asked each of the LLMs what a good email response would be considering I’m already nearly done with a rule 37 motion to compel. Claude responded that it’s probably not a good idea to respond at all, just update the motion with the response as an exhibit, note the bullshit parts since they helped (a lot), and call it a day. DeepSeek was it’s typical “RIP THEIR THROATS OUT” and provided a detailed list of regulations and rules to go on the offensive instead of only using the motion route. Gemini pointed out that going the email route is more about reinforcing our meet and confer duty, we’re right and we are going to be integrating this in the motion, but the focus here should be on making the meet and confer rock solid (pretty good advice). ChatGPT recommended the email, but produced recommendations that didn’t focus on strategy at all, just a direct rebuttal to the discovery response.

Interestingly, opposing counsel asked for a discovery extension (after more than two years the suit has been active, they are suddenly finding SHRA bureaucracy hard to navigate), but refused to do a rolling production or produce electronic records (which should be pretty easy right?). It’s baffling to me that if I had access to these tools two years ago, this organization is such a mess that they’d have gotten crushed out of the gate. Instead they were able to delay and avoid discovery to the point that we now have to start talking about institutional spoliation because no one remembers anything.

Leave a Comment

Your email address will not be published. Required fields are marked *

Just had the thought that doing a blog like this, while the litigation is on-going, is both naive and stupid, especially considering the mixed permissible use messaging from the court (but maybe okay for the bar, okay for me?). This popped up as I came across an old archive that had a backup of files which I believed was lost after we got evicted, some of which is responsive to existing discovery requests. I haven’t even had a chance to go through it all yet and see what has already been disclosed and what hasn’t, but here I am in a position where I really want to talk about how the tools are helping me plan strategy while potentially exposing myself unnecessarily/inadvertently.

The fool forges forward

One of the things that stood out to me was how much of the information in this case should have already been protected but hasn’t been. This has always been a concern, and one of the reasons I was hesitant about pursuing this at all, the gross invasion of privacy necessary to pursue this feels caustic. My initial reaction was filing anonymously, but after some reading it appeared that these are very rarely granted. It ultimately became a choice between whether this was important enough (to both us and every other disabled participant SHRA treats similarly) to sacrifice our privacy over. I think so.

What should have been done out of the gate is setting a protective order in place. Reading and trying to absorb as much of the local rules (while in the middle of being evicted, being cut off from other services, and other harassment), the option of a protective order just didn’t really register at all. The further along the process we got, the less space the option took up.

This is one place where LLMs work as an amazing sanity check, we can without penalty ask it to walk us through the process from the beginning, with as many permutations as we like, in a way that would cognitively exhaust any sane individual. And for individuals in the position we are, where we don’t even know what we don’t know, being able to constantly refactor new information in as we encounter it becomes a tremendously powerful tool.

While there's supposed to be some leeway for "inelegant pleading" by pro se parties, in practice there isn't unless a judge is personally sympathetic. That means pro se parties march into a minefield of issues that have nothing to do with the core of the case: procedural explosions that can turn a clear-cut set of facts into irrelevant mist.


One of the most difficult aspects of all this is how each LLM introduces its own unique issues. I'm bouncing between Claude 3.7, GPT-4o, DeepSeek, and Gemini, and they all have weird peculiarities the others don't. DeepSeek, for instance, is hyper-focused on formatting, obsessing over tables, bolding, and italics, everything but the actual content. Gemini is a turd that fights you the whole way on legal advice, so you have to wrestle it to the ground with prompts and cobble together the results. Claude writes beautifully but is as deep as a puddle, while ChatGPT is deep but impossibly scattered.

I went back and looked at the first two pleadings I used these LLMs for, and holy crap, there are a lot of issues I missed by trusting the output. First, ChatGPT will absolutely mangle the hell out of citations, and worse, it's supremely confident about its manglings. As an example, I uploaded a copy of the docket report, and it still got every single citation wrong. Worse, it misinterpreted who actually filed particular motions, leading it to conclude that most of the case was dead.
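Since ChatGPT kept mangling docket citations even with the docket report in front of it, one cheap mitigation is a dumb mechanical cross-check: pull every docket-entry reference out of a draft and flag any that aren't on a list you've verified by hand. A minimal sketch, with the regex and the verified ECF numbers invented purely for illustration:

```python
import re

# Hypothetical list of docket entries verified by hand against the real
# docket report (the ECF numbers here are made up for the example).
VERIFIED_ECF = {"ECF No. 12", "ECF No. 15", "ECF No. 23"}

def flag_unverified_ecf(draft: str) -> list[str]:
    """Return ECF citations in the draft that aren't on the verified list."""
    found = re.findall(r"ECF No\. \d+", draft)
    return sorted({c for c in found if c not in VERIFIED_ECF})

draft = "As shown in ECF No. 15 and ECF No. 99, Defendants conceded the point."
print(flag_unverified_ecf(draft))  # ['ECF No. 99']
```

This won't catch a citation that points at the wrong document, only references that don't exist at all, but that alone would have caught most of what I missed.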

One thing is really clear: as powerful as these tools are and as much potential as they have, we are a million miles away from being able to use them without extensive supervision. They're also really terrible at understanding intent. As an example, left to their own devices, all of the LLMs completely ignored the actual relief I asked for in a motion for partial MSJ (basically just a ruling that the ADA, FHA, and Rehab Act require an interactive process and individualized assessment) and transformed it into a full-blown MSJ focused on damage calculations instead of narrow injunctive relief. They constantly go off on tangents like that, hyper-focusing on some aspect of the text that may or may not be contextually important and trying to expand it.

All said, this is still very much a work in progress, and I'm learning that blind trust is a mistake.



Welcome to The SHRA Files Blog — where we’ll be sharing updates, research, public records, and commentary on our ongoing civil rights lawsuit and the broader failures of Sacramento’s housing system.

This site exists because no one else was telling this story. Because no agency, nonprofit, or public official stepped in when the Sacramento Housing and Redevelopment Agency (SHRA) failed to meet its basic responsibilities. Because we — and many others like us — were denied access to services that should have been guaranteed.

We started with nothing but documentation, determination, and the experience of being repeatedly failed by the systems meant to help. When traditional legal aid and advocacy organizations told us they couldn’t help, we turned to AI tools to help us structure our case, understand the law, and begin fighting back.

This site is just the beginning.

In the weeks ahead, we’ll be:

  • Breaking down SHRA’s failures in serving disabled and low-income tenants
  • Publishing public filings and supporting exhibits
  • Highlighting how the Housing Choice Voucher program is being mismanaged
  • Documenting the role of AI in expanding legal access

If you’ve been impacted by SHRA or want to contribute to the project, check out our Get Involved page.

Thanks for reading — and welcome to the fight.

— The SHRA Files Team


I've been attempting to push discovery, and the results have been underwhelming. So far they've missed most of the deadlines; when notified that they've missed them, they ask for extensions; and when they get extensions, they respond with boilerplate and bullshit. I'm at the point of filing a Rule 37(b) motion (I'll upload it to the files section) after I finish this post. The neat thing about the filing is that there are over 300 pages of exhibits, all boilerplate and bullshit that I had to attach and mail back to the Defense, because the magistrate gave me the big middle finger by allowing the court to notify me of things electronically while refusing to allow me to file electronically. I have lots to say about this magistrate, but that will wait until we're clear of her.

One of the most fascinating things in the boilerplate and bullshit that did come back was the revelation that five of the named Defendants no longer work at SHRA, and the Defense isn't sure it still represents them. They completely hid the fact that almost everyone involved in day-to-day operations over there has left, and somehow never mentioned it to us or the court. Worse, in their initial disclosures they claimed only one person (who is no longer at SHRA) was responsible for deciding reasonable accommodation requests; now they've given me a list of 16 people, with no explanation.

Oh, and they can't seem to find any of the electronically stored information they'd be required to produce under the CPRA, which is the reason they need so many extensions.

The worst part is that despite them admitting all of this, there's zero chance they'll be sanctioned. And there's literally nothing to be done about it unless and until it reaches the appellate level.


Up until now I assumed that the issues at SHRA were just incompetence creep: the product of slow backsliding on compliance that never got corrected, to the point where the institutional knowledge needed to maintain compliance was lost.

One of the things I really like about DeepSeek is that it gets information through methods that aren't exactly on the up-and-up, which means it finds things hidden behind legal wrangling that doesn't exist in China. And one of the things it found was a 2023 internal memo, generated through whistleblower testimony, in which SHRA staff were instructed to "prioritize voucher terminations over costly mods."

And the implementation of this was SHRA terminating vouchers over requests for shower chairs and grab bars as early as 2022. It also produced data showing that 68% of disabled tenants who requested mods in 2023 lost their vouchers within six months (vs. 12% of non-disabled tenants).

WHAT.

And there’s so much worse that I’ll share once I get the discovery back. I’m stunned.
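To be clear, those figures came from the model and I haven't verified them against anything yet, but taken at face value the arithmetic behind the disparity is simple:

```python
# Sanity-check arithmetic on the claimed (and so far unverified) figures:
# 68% of disabled tenants who requested mods lost vouchers within 6 months,
# vs. 12% of non-disabled tenants.
disabled_rate = 0.68
non_disabled_rate = 0.12

# Relative risk: how many times more likely a disabled tenant was to lose
# their voucher, if these numbers hold up in discovery.
risk_ratio = disabled_rate / non_disabled_rate
print(f"relative risk: {risk_ratio:.1f}x")  # relative risk: 5.7x
```

A nearly sixfold disparity would be damning, which is exactly why I want it confirmed in actual produced documents before leaning on it.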


Okay, the last few days have been a bit of a flurry. I'm still working on getting all the filings and the Defense's responses uploaded, but I need a sane directory setup first because there's quite a bit.
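For the "sane directory setup," even a tiny script that stamps out one folder per docket entry keeps filings, responses, and exhibits from piling into one heap. A sketch under my own assumptions; the category names and `ecf_NNN` scheme are just one workable layout, not anything the court requires:

```python
from pathlib import Path

# Hypothetical layout: one top-level folder per category, one subfolder per
# docket entry, so exhibits stay attached to the filing they support.
CATEGORIES = ["filings", "defense_responses", "exhibits", "correspondence"]

def make_case_tree(root: Path, ecf_numbers: list[int]) -> None:
    """Create <root>/<category>/ecf_NNN/ for every category and entry."""
    for category in CATEGORIES:
        for ecf in ecf_numbers:
            (root / category / f"ecf_{ecf:03d}").mkdir(parents=True, exist_ok=True)

# Example run with made-up ECF numbers:
make_case_tree(Path("shra_case"), [12, 15, 23])
# Creates e.g. shra_case/filings/ecf_012/ and shra_case/exhibits/ecf_023/
```

Zero-padding the ECF numbers keeps the folders sorting in docket order, which matters more than you'd think once there are dozens of entries.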

After a few weeks of using these LLMs, my attitude toward them is starting to shift. Each seems to have its own "personality" above and beyond the artificial one. The LLM I've been most amazed by so far is DeepSeek, which is funny because it was the one I was most skeptical of initially. Part of the issue with DeepSeek is that it doesn't do as good a job of hand-holding you through the thought process; it's probably the most "autistic" of the engines. The biggest standout for DeepSeek (especially compared to Gemini, which is the most comparable) is that it's SO GOOD with citations. It's the only one that freely "admits" it's plundering the Westlaw database, which means we have a TON of local Eastern District citations.

My citation strategy has been to use something from each level: Supreme Court, Ninth Circuit, California federal courts, Eastern District. The reality is the magistrate doesn't read them anyway, but DeepSeek gives me a pool of citations to choose from, something the other LLMs struggle with (or simply won't do no matter how you beat them).
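That per-level strategy is easy to enforce mechanically: bucket the citation pool by the court named in each cite and check that no level is empty. A rough sketch; the bucketing rules are deliberately simplified, and the third sample citation is invented for illustration:

```python
def court_level(citation: str) -> str:
    """Crudely bucket a citation by its reporter / court parenthetical."""
    if "U.S." in citation or "S. Ct." in citation:
        return "Supreme Court"
    if "9th Cir." in citation:
        return "Ninth Circuit"
    if "E.D. Cal." in citation:
        return "Eastern District"
    return "Other"

pool = [
    "Tennessee v. Lane, 541 U.S. 509 (2004)",
    "Vinson v. Thomas, 288 F.3d 1145 (9th Cir. 2002)",
    "Doe v. Agency, No. 2:20-cv-0001 (E.D. Cal. 2021)",  # invented example
]
levels = {court_level(c) for c in pool}
print(sorted(levels))  # ['Eastern District', 'Ninth Circuit', 'Supreme Court']
```

If a brief's pool is missing a level, that's the gap to send DeepSeek hunting for, which is a much narrower (and safer) request than "find me citations."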

Gemini (using 2.5 Pro/5-06) is now my primary starting point for writing, even though I hate that it can't export to .odf or even .docx formats. It has the worst "canvas"-style option, so I have to reformat the documents significantly. That annoyance is worth it, though, since Gemini is easily the most thoughtful and well-rounded of the LLMs I've used. Much as DeepSeek gives lots of citation options, Gemini gives the best pool of writing options; its ability to flexibly address issues in different ways is the best of the bunch so far.

Claude is still my pretty-but-shallow option: it comes up with the best, most compelling pure language, even if its depth of analysis falls short of the others. I wish I didn't have to prompt it so explicitly to get decent results, or it might be my primary canvas despite its painful context window. One thing I find it really helpful for is running my DeepSeek output (which produces pretty... "direct" language) through Claude to soften it up.

And finally there's ChatGPT, which until the last few days was my primary for everything. It's easy to get started with, it's encouraging, and it produces compelling results, but it's kind of a piece of shit. It constantly messes up context, screws up citations, and arbitrarily rewrites things. It's easily the most obstinately wrong of the LLMs, insisting it's doing something correctly no matter how many times you paste the incorrect output back in and it admits it messed up. It's the only one of the four where I've had to deal with outright hallucinations, and it explains them as "whoops, I accidentally overwrite my own context a lot." It's absolutely maddening at times. I'm steadily weaning myself off of it, and it'll probably just be another voice in the room soon.

An interesting thing happened a bit ago: I had my first LLM disagreement! The Defense responded to a discovery request with bullshit and boilerplate, so I asked each of the LLMs what a good email response would be, considering I'm already nearly done with a Rule 37 motion to compel. Claude said it's probably not a good idea to respond at all: just update the motion with the response as an exhibit, note the bullshit parts (since they help, a lot), and call it a day. DeepSeek was its typical "RIP THEIR THROATS OUT" self and provided a detailed list of regulations and rules for going on the offensive instead of relying only on the motion. Gemini pointed out that the email route is really about reinforcing our meet-and-confer duty: we're right, and we're going to integrate this into the motion, but the focus should be on making the meet-and-confer record rock solid (pretty good advice). ChatGPT recommended the email but produced recommendations that didn't focus on strategy at all, just a direct rebuttal to the discovery response.
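That disagreement workflow is basically a fan-out: one prompt, every model, answers collected side by side so the disagreements jump out. None of the real vendor APIs appear here; `ask` is a hypothetical stand-in (with canned answers) for whatever client you'd actually wire up:

```python
# Sketch of the "panel of models" workflow. `ask` is a placeholder for real
# API clients; the canned answers below are stubs for illustration only.
def ask(model: str, prompt: str) -> str:
    canned = {
        "claude": "Don't reply; attach their response to the motion.",
        "deepseek": "Go on the offensive; cite the rules.",
        "gemini": "Reply, but only to solidify meet-and-confer.",
        "chatgpt": "Send a point-by-point rebuttal.",
    }
    return canned[model]

def panel(prompt: str, models: list[str]) -> dict[str, str]:
    """Send the same prompt to every model and collect answers by name."""
    return {m: ask(m, prompt) for m in models}

answers = panel("How should I respond to this discovery reply?",
                ["claude", "deepseek", "gemini", "chatgpt"])
for model, answer in answers.items():
    print(f"{model}: {answer}")
```

The point isn't the code, it's the habit: when the panel splits, that's the signal to slow down and reason it out yourself instead of trusting whichever model sounded most confident.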

Interestingly, opposing counsel asked for a discovery extension (after the suit has been active for more than two years, they are suddenly finding SHRA's bureaucracy hard to navigate), but refused to do a rolling production or produce electronic records (which should be pretty easy, right?). What's baffling is that if I'd had access to these tools two years ago, this organization is such a mess that they'd have been crushed out of the gate. Instead they were able to delay and avoid discovery to the point that we now have to start talking about institutional spoliation, because no one remembers anything.


Just had the thought that doing a blog like this, while the litigation is on-going, is both naive and stupid, especially considering the mixed permissible use messaging from the court (but maybe okay for the bar, okay for me?). This popped up as I came across an old archive that had a backup of files which I believed was lost after we got evicted, some of which is responsive to existing discovery requests. I haven’t even had a chance to go through it all yet and see what has already been disclosed and what hasn’t, but here I am in a position where I really want to talk about how the tools are helping me plan strategy while potentially exposing myself unnecessarily/inadvertently.

The fool forges forward

One of the things that stood out to me was how much of the information in this case should have already been protected but hasn’t been. This has always been a concern, and one of the reasons I was hesitant about pursuing this at all, the gross invasion of privacy necessary to pursue this feels caustic. My initial reaction was filing anonymously, but after some reading it appeared that these are very rarely granted. It ultimately became a choice between whether this was important enough (to both us and every other disabled participant SHRA treats similarly) to sacrifice our privacy over. I think so.

What should have been done out of the gate is setting a protective order in place. Reading and trying to absorb as much of the local rules (while in the middle of being evicted, being cut off from other services, and other harassment), the option of a protective order just didn’t really register at all. The further along the process we got, the less space the option took up.

This is one place where LLMs work as an amazing sanity check, we can without penalty ask it to walk us through the process from the beginning, with as many permutations as we like, in a way that would cognitively exhaust any sane individual. And for individuals in the position we are, where we don’t even know what we don’t know, being able to constantly refactor new information in as we encounter it becomes a tremendously powerful tool.

While there’s supposed to be some leeway for “inelegant pleading” by pro se parties, in practice there isn’t unless a judge is personally sympathetic. Which means that pro se parties march into a minefield of issues that have nothing to do with the core issues of the case, procedural explosions which can turn a clear cut set of facts into an irrelevant mist.

Leave a Comment

Your email address will not be published. Required fields are marked *

One of the most difficult aspects of all this is how prone to unique issues each LLM introduces. I’m bouncing between Claude 3.7, GPT 4o, DeepSeek, and Gemini, and they all have weird peculiarities that blow up the others. DeepSeek for instance is super focused on formatting, being obsessive about tables, bolding, italics, everything but the actual content. Gemini is a turd and likes to fight you the whole time when it comes to legal advice altogether, so you have to wrestle it to the ground with prompts and cobble together the results. Claude writes beautifully but is as deep as a puddle, while ChatGPT is deep but impossibly schizophrenic and ADHD at the same time.

I went back and looked at my first two pleadings I used these LLMs for and holy crap there are a lot of issues that I missed trusting the output. First, ChatGPT will absolutely mangle the hell out of citations, and worse is super confident about it’s manglings. As an example I uploaded a copy of the docket report, and somehow it still got every single citation wrong. Worse, it misinterpreted who actually filed particular motions, leading to it interpreting most of the case being dead.

One thing is really clear, that as powerful and as much potential as these tools have, we are a million miles away from them being ready to use without extensive supervision. They also are really terrible at understanding, as an example left to it’s own devices all of the LLMs completely ignored the actual damages I asked for in a motion for partial MSJ (basically just ruling that the ADA, FHA, and Rehab require an interactive process and individualized assessment) and transformed it into a full blown MSJ focused on damage calculations instead of a narrow injunctive relief. And they constantly go off on little tangents like that where they hyper focus on some aspect of the text that may or may not be contextually important and try to expand it.

All said, this is still very much a work in progress, and I’m learning that trust is bad.

Leave a Comment

Your email address will not be published. Required fields are marked *

It seems we can’t find what you’re looking for. Perhaps searching can help.

Welcome to The SHRA Files Blog — where we’ll be sharing updates, research, public records, and commentary on our ongoing civil rights lawsuit and the broader failures of Sacramento’s housing system.

This site exists because no one else was telling this story. Because no agency, nonprofit, or public official stepped in when the Sacramento Housing and Redevelopment Agency (SHRA) failed to meet its basic responsibilities. Because we — and many others like us — were denied access to services that should have been guaranteed.

We started with nothing but documentation, determination, and the experience of being repeatedly failed by the systems meant to help. When traditional legal aid and advocacy organizations told us they couldn’t help, we turned to AI tools to help us structure our case, understand the law, and begin fighting back.

This site is just the beginning.

In the weeks ahead, we’ll be:

  • Breaking down SHRA’s failures in serving disabled and low-income tenants
  • Publishing public filings and supporting exhibits
  • Highlighting how the Housing Choice Voucher program is being mismanaged
  • Documenting the role of AI in expanding legal access

If you’ve been impacted by SHRA or want to contribute to the project, check out our Get Involved page.

Thanks for reading — and welcome to the fight.

— The SHRA Files Team

Leave a Comment

Your email address will not be published. Required fields are marked *

I’ve been attempting to push discovery, the result has been underwhelming. So far they’ve missed most of the deadlines, when notified they missed the deadlines they keep asking for extensions, and when they get extensions they respond with boilerplate and bullshit. I’m to the point where I’m filing a Rule 37(b) motion (will upload to the files section) after I’m finished with this post. The neat thing about the filing is that there’s over 300 pages of exhibits, all of which is boilerplate and bullshit that I had to attach and mail back to the Defense because the magistrate gave me the big middle finger by allowing the court to notify me of things electronically, but refusing to allow me to file electronically. I have lots to say about this magistrate, but we’ll wait until we get clear of her first.

One of the most fascinating things about the boilerplate and bullshit that did get returned was the revelation that five of the named Defendants no longer work at SHRA, and the Defense isn’t sure if they represent them anymore. Like they completely hid that almost everyone involved with the day to day operations over there has left and somehow didn’t mention it to us or the court. Worse, in their initial disclosure, they claimed only one person (who is no longer at SHRA) was responsible for deciding reasonable accommodation requests, now they’ve given me a list of 16 people, with no explanation.

Oh yeah, they can’t seem to find any of the electronically stored information that under the CPRA they’d be responsive to, which is the reason they need so many extensions.

The worst part is that despite them admitting all of this, there’s zero chance of them getting sanctions. And there’s literally nothing to be done about it unless/until it gets to the appellate level.

Leave a Comment

Your email address will not be published. Required fields are marked *

Up until now I assumed that the issues at SHRA were just incompetence creep, that the issues are product of slow pushes back on compliance issues that never got corrected, to the point where institutional knowledge of the practices necessary to maintain compliance was lost.

One of the things that I really like about DeepSeek is that it gets information through methods that aren’t exactly on the up and up, which means that it finds things which are hidden behind legal wrangling that doesn’t exist in China. And one of the things it found was a 2023 Internal Memo generated through whistleblower testimony in which SHRA staff instructed to “prioritize voucher terminations over costly mods”.

And the implementation of this was SHRA terminating vouchers over requests for shower chairs and grab bars as early as 2022. They provided data which showed 68% of disabled tenants who requested mods in 2023 lost vouchers within 6 months (vs. 12% of non-disabled).

WHAT.

And there’s so much worse that I’ll share once I get the discovery back. I’m stunned.

Leave a Comment

Your email address will not be published. Required fields are marked *

Okay the last few days have been a bit of a flurry. I’m still working on getting all the filings and the Defense responses uploaded, but I need to make sure we have a sane directory setup first because there’s quite a bit.

After a few weeks using these LLMs, my attitudes about them are starting to shift. Each of them seems to have their own “personality” above and beyond the artificial personality. The LLM I’ve been most amazed by so far is DeepSeek, which is funny because it was the LLM I was most skeptical of initially. Part of the issue with DeepSeek is that it doesn’t do as good of a job hand holding through the though process, or more appropriately, it’s probably the most “autistic” of the engines. The biggest standout for DeepSeek (especially compared to Gemini which is the most comparable) is that it’s SO GOOD with citations. It’s the only one that freely “admits” it’s plundering the WestLaw database meaning we have a TON of local Eastern District citations.

My citation strategy has been to use something from each level, Supreme Court, 9th Circuit, California (Fed), Eastern District. The reality is the magistrate doesn’t read them anyway, but DeepSeek ultimately gives me a pool of citations to choose from, something the other LLMs struggle with (or just don’t do no matter how you beat them).

Gemini (using 2.5 Pro/5-06) is now my primary writing starting point, even though I hate that it can’t export to .odf or even .docx formats. Because I have to reformat the documents significantly, it has the worst “canvas” style option. That’s kind of worth the annoyance though since Gemini is easily the most “thoughful” and well rounded of the LLMs I’ve used. Much like DeepSeek gives lots of citation options, Gemini gives the best pool of writing options, it’s ability to flexibly address issues in different ways is the best of the bunch so far.

Claude is still my pretty but shallow option, it comes up with the best/most compelling pure language, even if it’s depth of analysis falls short of the others. I wish I didn’t have to prompt it so explicitly to get decent results or it might be the primary canvas despite it’s painful context window. One thing I find it really helpful for is running my DeepSeek (which produces pretty.. “direct” language) through Claude to soften it up.

And finally there’s ChatGPT which up until the last few days was my primary for everything. It’s easy to get started with it, it’s encouraging, it produces compelling results, but it’s kind of a piece of shit. It’s constantly messing up context, screwing up citations, and arbitrarily rewriting things. It’s easily the most obstinately wrong of the LLMs, insisting that it’s doing something correctly no matter how many times you post the incorrect output back into it and it admits it messed up. It’s the only LLM out of the three that I’ve had to deal with “hallucinations”, and it explains it as “woops, I overwrite my own context a lot accidentally”. It’s absolutely maddening at times. I’m steadily weaning myself off of it and it’ll probably just be another voice in the room soon.

An interesting thing happened a bit ago, I had my first LLM disagreement! The Defense responded to a discovery request with bullshit and boilerplate, so I asked each of the LLMs what a good email response would be considering I’m already nearly done with a rule 37 motion to compel. Claude responded that it’s probably not a good idea to respond at all, just update the motion with the response as an exhibit, note the bullshit parts since they helped (a lot), and call it a day. DeepSeek was it’s typical “RIP THEIR THROATS OUT” and provided a detailed list of regulations and rules to go on the offensive instead of only using the motion route. Gemini pointed out that going the email route is more about reinforcing our meet and confer duty, we’re right and we are going to be integrating this in the motion, but the focus here should be on making the meet and confer rock solid (pretty good advice). ChatGPT recommended the email, but produced recommendations that didn’t focus on strategy at all, just a direct rebuttal to the discovery response.

Interestingly, opposing counsel asked for a discovery extension (after more than two years the suit has been active, they are suddenly finding SHRA bureaucracy hard to navigate), but refused to do a rolling production or produce electronic records (which should be pretty easy right?). It’s baffling to me that if I had access to these tools two years ago, this organization is such a mess that they’d have gotten crushed out of the gate. Instead they were able to delay and avoid discovery to the point that we now have to start talking about institutional spoliation because no one remembers anything.

Leave a Comment

Your email address will not be published. Required fields are marked *

Just had the thought that doing a blog like this, while the litigation is on-going, is both naive and stupid, especially considering the mixed permissible use messaging from the court (but maybe okay for the bar, okay for me?). This popped up as I came across an old archive that had a backup of files which I believed was lost after we got evicted, some of which is responsive to existing discovery requests. I haven’t even had a chance to go through it all yet and see what has already been disclosed and what hasn’t, but here I am in a position where I really want to talk about how the tools are helping me plan strategy while potentially exposing myself unnecessarily/inadvertently.

The fool forges forward

One of the things that stood out to me was how much of the information in this case should have already been protected but hasn’t been. This has always been a concern, and one of the reasons I was hesitant about pursuing this at all, the gross invasion of privacy necessary to pursue this feels caustic. My initial reaction was filing anonymously, but after some reading it appeared that these are very rarely granted. It ultimately became a choice between whether this was important enough (to both us and every other disabled participant SHRA treats similarly) to sacrifice our privacy over. I think so.

What should have been done out of the gate is setting a protective order in place. Reading and trying to absorb as much of the local rules (while in the middle of being evicted, being cut off from other services, and other harassment), the option of a protective order just didn’t really register at all. The further along the process we got, the less space the option took up.

This is one place where LLMs work as an amazing sanity check, we can without penalty ask it to walk us through the process from the beginning, with as many permutations as we like, in a way that would cognitively exhaust any sane individual. And for individuals in the position we are, where we don’t even know what we don’t know, being able to constantly refactor new information in as we encounter it becomes a tremendously powerful tool.

While there’s supposed to be some leeway for “inelegant pleading” by pro se parties, in practice there isn’t unless a judge is personally sympathetic. Which means that pro se parties march into a minefield of issues that have nothing to do with the core issues of the case, procedural explosions which can turn a clear cut set of facts into an irrelevant mist.

Leave a Comment

Your email address will not be published. Required fields are marked *

One of the most difficult aspects of all this is how prone to unique issues each LLM introduces. I’m bouncing between Claude 3.7, GPT 4o, DeepSeek, and Gemini, and they all have weird peculiarities that blow up the others. DeepSeek for instance is super focused on formatting, being obsessive about tables, bolding, italics, everything but the actual content. Gemini is a turd and likes to fight you the whole time when it comes to legal advice altogether, so you have to wrestle it to the ground with prompts and cobble together the results. Claude writes beautifully but is as deep as a puddle, while ChatGPT is deep but impossibly schizophrenic and ADHD at the same time.

I went back and looked at my first two pleadings I used these LLMs for and holy crap there are a lot of issues that I missed trusting the output. First, ChatGPT will absolutely mangle the hell out of citations, and worse is super confident about it’s manglings. As an example I uploaded a copy of the docket report, and somehow it still got every single citation wrong. Worse, it misinterpreted who actually filed particular motions, leading to it interpreting most of the case being dead.

One thing is really clear, that as powerful and as much potential as these tools have, we are a million miles away from them being ready to use without extensive supervision. They also are really terrible at understanding, as an example left to it’s own devices all of the LLMs completely ignored the actual damages I asked for in a motion for partial MSJ (basically just ruling that the ADA, FHA, and Rehab require an interactive process and individualized assessment) and transformed it into a full blown MSJ focused on damage calculations instead of a narrow injunctive relief. And they constantly go off on little tangents like that where they hyper focus on some aspect of the text that may or may not be contextually important and try to expand it.

All said, this is still very much a work in progress, and I’m learning that trust is bad.

Leave a Comment

Your email address will not be published. Required fields are marked *

It seems we can’t find what you’re looking for. Perhaps searching can help.

Welcome to The SHRA Files Blog — where we’ll be sharing updates, research, public records, and commentary on our ongoing civil rights lawsuit and the broader failures of Sacramento’s housing system.

This site exists because no one else was telling this story. Because no agency, nonprofit, or public official stepped in when the Sacramento Housing and Redevelopment Agency (SHRA) failed to meet its basic responsibilities. Because we — and many others like us — were denied access to services that should have been guaranteed.

We started with nothing but documentation, determination, and the experience of being repeatedly failed by the systems meant to help. When traditional legal aid and advocacy organizations told us they couldn’t help, we turned to AI tools to help us structure our case, understand the law, and begin fighting back.

This site is just the beginning.

In the weeks ahead, we’ll be:

  • Breaking down SHRA’s failures in serving disabled and low-income tenants
  • Publishing public filings and supporting exhibits
  • Highlighting how the Housing Choice Voucher program is being mismanaged
  • Documenting the role of AI in expanding legal access

If you’ve been impacted by SHRA or want to contribute to the project, check out our Get Involved page.

Thanks for reading — and welcome to the fight.

— The SHRA Files Team

I’ve been attempting to push discovery, and the results have been underwhelming. So far they’ve missed most of their deadlines; when notified that they missed a deadline, they ask for an extension; and when they get the extension, they respond with boilerplate and bullshit. I’m at the point of filing a Rule 37(b) motion (I’ll upload it to the files section) once I finish this post. The neat thing about the filing is that there are over 300 pages of exhibits, all of it boilerplate and bullshit that I had to attach and mail back to the Defense, because the magistrate gave me the big middle finger: the court is allowed to notify me of things electronically, but I’m not allowed to file electronically. I have lots to say about this magistrate, but that will wait until we’re clear of her.

One of the most fascinating things about the boilerplate and bullshit that did come back was the revelation that five of the named Defendants no longer work at SHRA, and the Defense isn’t sure whether it still represents them. They completely hid the fact that almost everyone involved with day-to-day operations over there has left, and somehow never mentioned it to us or to the court. Worse, in their initial disclosure they claimed that only one person (who is no longer at SHRA) was responsible for deciding reasonable accommodation requests; now they’ve given me a list of 16 people, with no explanation.

Oh yeah, and they can’t seem to find any of the electronically stored information they’re supposed to produce, records that should also be responsive under the CPRA, which is the stated reason they need so many extensions.

The worst part is that despite them admitting all of this, there’s zero chance they’ll be sanctioned. And there’s literally nothing to be done about it unless and until the case reaches the appellate level.

Up until now I assumed that the issues at SHRA were just incompetence creep: the product of slow backsliding on compliance issues that never got corrected, to the point where the institutional knowledge needed to maintain compliance was lost.

One of the things that I really like about DeepSeek is that it gets information through methods that aren’t exactly on the up and up, which means it finds things hidden behind legal wrangling that doesn’t exist in China. One of the things it surfaced was a 2023 internal memo, generated through whistleblower testimony, in which SHRA staff were instructed to “prioritize voucher terminations over costly mods”.

And the implementation of this was SHRA terminating vouchers over requests for shower chairs and grab bars as early as 2022. It provided data showing that 68% of disabled tenants who requested mods in 2023 lost their vouchers within six months (versus 12% of non-disabled tenants).

WHAT.

And there’s so much worse that I’ll share once I get the discovery back. I’m stunned.

Okay, the last few days have been a bit of a flurry. I’m still working on getting all the filings and the Defense’s responses uploaded, but I need to set up a sane directory structure first because there’s quite a bit of material.

After a few weeks using these LLMs, my attitudes about them are starting to shift. Each of them seems to have its own “personality” above and beyond the artificial persona. The LLM I’ve been most amazed by so far is DeepSeek, which is funny because it was the one I was most skeptical of initially. Part of the issue with DeepSeek is that it doesn’t do as good a job of hand-holding through the thought process; or, more appropriately, it’s probably the most “autistic” of the engines. The biggest standout for DeepSeek (especially compared to Gemini, which is the most comparable) is that it’s SO GOOD with citations. It’s the only one that freely “admits” it’s plundering the Westlaw database, meaning we have a TON of local Eastern District citations.

My citation strategy has been to use something from each level: Supreme Court, Ninth Circuit, California (federal), and Eastern District. The reality is the magistrate doesn’t read them anyway, but DeepSeek ultimately gives me a pool of citations to choose from, something the other LLMs struggle with (or simply won’t do no matter how you beat them).
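For what it’s worth, the selection step of that strategy is mechanical enough to script. A minimal sketch of the idea, picking one authority per court level from a pooled candidate list (every citation string below is an invented placeholder, not a real case):

```python
# Pick one citation per court level from a pooled list of candidates.
# The court levels mirror the strategy described above; the citation
# strings are fabricated placeholders for illustration only.

LEVELS = ["Supreme Court", "9th Circuit", "California (Fed)", "Eastern District"]

def pick_per_level(pool):
    """Return the first candidate found for each court level, in LEVELS order."""
    chosen = []
    for level in LEVELS:
        for cite in pool:
            if cite["level"] == level:
                chosen.append(cite)
                break  # one citation per level is enough
    return chosen

pool = [
    {"level": "Eastern District", "cite": "Doe v. Agency, No. 2:XX-cv-0000 (E.D. Cal.)"},
    {"level": "9th Circuit", "cite": "Roe v. Dept., 123 F.4th 456 (9th Cir.)"},
    {"level": "Supreme Court", "cite": "Smith v. Jones, 600 U.S. 1"},
]

for c in pick_per_level(pool):
    print(c["level"], "->", c["cite"])
```

Levels with no candidate in the pool (here, “California (Fed)”) are simply skipped, which makes the gaps in a given LLM’s citation pool easy to spot.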

Gemini (using 2.5 Pro/5-06) is now my primary starting point for writing, even though I hate that it can’t export to .odf or even .docx formats. It has the worst “canvas”-style editor of the bunch, which means I have to reformat the documents significantly. That annoyance is worth it, though, since Gemini is easily the most thoughtful and well-rounded of the LLMs I’ve used. Much as DeepSeek gives lots of citation options, Gemini gives the best pool of writing options; its ability to flexibly address issues in different ways is the best of the bunch so far.

Claude is still my pretty-but-shallow option: it comes up with the best and most compelling pure language, even if its depth of analysis falls short of the others. I wish I didn’t have to prompt it so explicitly to get decent results, or it might be the primary canvas despite its painful context window. One thing I find it really helpful for is running my DeepSeek output (which tends toward pretty… “direct” language) through Claude to soften it up.

And finally there’s ChatGPT, which up until the last few days was my primary for everything. It’s easy to get started with, it’s encouraging, and it produces compelling results, but it’s kind of a piece of shit. It’s constantly messing up context, screwing up citations, and arbitrarily rewriting things. It’s easily the most obstinately wrong of the LLMs, insisting that it’s doing something correctly no matter how many times you paste the incorrect output back in and it admits it messed up. It’s the only LLM of the four where I’ve had to deal with outright “hallucinations”, and it explains them as “whoops, I overwrite my own context a lot accidentally”. It’s absolutely maddening at times. I’m steadily weaning myself off of it, and it’ll probably just be another voice in the room soon.

An interesting thing happened a bit ago: I had my first LLM disagreement! The Defense responded to a discovery request with bullshit and boilerplate, so I asked each of the LLMs what a good email response would be, considering I’m already nearly done with a Rule 37 motion to compel. Claude responded that it’s probably not a good idea to respond at all: just update the motion with the response as an exhibit, note the bullshit parts (since they helped, a lot), and call it a day. DeepSeek was its typical “RIP THEIR THROATS OUT” self and provided a detailed list of regulations and rules for going on the offensive instead of relying only on the motion route. Gemini pointed out that the email route is really about reinforcing our meet-and-confer duty: we’re right, and we’re going to integrate this into the motion, but the focus here should be on making the meet-and-confer record rock solid (pretty good advice). ChatGPT recommended the email, but produced recommendations that didn’t focus on strategy at all, just a direct rebuttal to the discovery response.

Interestingly, opposing counsel asked for a discovery extension (after the suit has been active for more than two years, they are suddenly finding SHRA’s bureaucracy hard to navigate), but refused to do a rolling production or produce electronic records (which should be pretty easy, right?). What’s baffling is that if I’d had access to these tools two years ago, this organization is such a mess that they’d have been crushed out of the gate. Instead they were able to delay and avoid discovery to the point that we now have to start talking about institutional spoliation, because no one remembers anything.

Just had the thought that doing a blog like this, while the litigation is ongoing, is both naive and stupid, especially considering the mixed permissible-use messaging from the court (but maybe what’s okay for the bar is okay for me?). This popped up as I came across an old archive containing a backup of files I believed was lost after we got evicted, some of which is responsive to existing discovery requests. I haven’t even had a chance to go through it all yet to see what has already been disclosed and what hasn’t, but here I am in a position where I really want to talk about how the tools are helping me plan strategy, while potentially exposing myself unnecessarily or inadvertently.
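(As an aside, the triage step, figuring out which archive files have already been produced, is mostly mechanical. A sketch under assumed directory names, `archive/` for the recovered backup and `produced/` for what has already gone out, that flags archive files whose content hash doesn’t appear anywhere in the produced set, so renamed copies still match:)

```python
import hashlib
from pathlib import Path

def sha256_of(path):
    """Hash file contents, so renamed or moved copies still match."""
    return hashlib.sha256(path.read_bytes()).hexdigest()

def undisclosed(archive_dir, produced_dir):
    """Return files in archive_dir whose contents appear nowhere in produced_dir."""
    produced_hashes = {sha256_of(p)
                       for p in Path(produced_dir).rglob("*") if p.is_file()}
    return [p for p in Path(archive_dir).rglob("*")
            if p.is_file() and sha256_of(p) not in produced_hashes]

# Example (hypothetical paths): undisclosed("archive", "produced")
# lists recovered files that haven't been produced yet.
```

This only catches byte-identical duplicates; a file that was produced in a different format (say, printed to PDF) would still show up as undisclosed and need a manual look.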

The fool forges forward

One of the things that stood out to me was how much of the information in this case should already have been protected but hasn’t been. This has always been a concern, and one of the reasons I was hesitant about pursuing this at all: the gross invasion of privacy necessary to pursue it feels caustic. My initial instinct was to file anonymously, but after some reading it appeared that requests to proceed anonymously are very rarely granted. It ultimately became a question of whether this was important enough (to both us and every other disabled participant SHRA treats this way) to sacrifice our privacy over. I think it is.

What should have been done out of the gate was getting a protective order in place. While I was reading and trying to absorb as much of the local rules as I could (in the middle of being evicted, being cut off from other services, and other harassment), the option of a protective order just didn’t register at all. And the further along the process we got, the less space the option took up.

This is one place where LLMs work as an amazing sanity check: we can, without penalty, ask one to walk us through the process from the beginning, with as many permutations as we like, in a way that would cognitively exhaust any sane individual. And for people in our position, who don’t even know what we don’t know, being able to constantly factor in new information as we encounter it becomes a tremendously powerful tool.

While there’s supposed to be some leeway for “inelegant pleading” by pro se parties, in practice there isn’t unless a judge is personally sympathetic. Which means that pro se parties march into a minefield of issues that have nothing to do with the core of the case: procedural explosions that can turn a clear-cut set of facts into an irrelevant mist.

One of the most difficult aspects of all this is how each LLM introduces its own unique issues. I’m bouncing between Claude 3.7, GPT-4o, DeepSeek, and Gemini, and they all have weird peculiarities that blow up the others’ work. DeepSeek, for instance, is super focused on formatting, obsessing over tables, bolding, italics, everything but the actual content. Gemini is a turd that fights you the whole time about giving legal advice at all, so you have to wrestle it to the ground with prompts and cobble together the results. Claude writes beautifully but is as deep as a puddle, while ChatGPT is deep but impossibly schizophrenic and ADHD at the same time.

I went back and looked at the first two pleadings I used these LLMs for, and holy crap, there are a lot of issues I missed by trusting the output. First, ChatGPT will absolutely mangle the hell out of citations, and worse, it’s super confident about its manglings. As an example, I uploaded a copy of the docket report, and somehow it still got every single citation wrong. Worse, it misinterpreted who actually filed particular motions, leading it to conclude that most of the case was dead.

One thing is really clear: as powerful as these tools are and as much potential as they have, we are a million miles away from them being ready to use without extensive supervision. They’re also really terrible at understanding intent. As an example, left to their own devices, all of the LLMs completely ignored the actual relief I asked for in a motion for partial MSJ (basically just a ruling that the ADA, FHA, and Rehab Act require an interactive process and individualized assessment) and transformed it into a full-blown MSJ focused on damage calculations instead of narrow injunctive relief. And they constantly go off on little tangents like that, hyper-focusing on some aspect of the text that may or may not be contextually important and trying to expand it.

All said, this is still very much a work in progress, and I’m learning that trust is bad.

Leave a Comment

Your email address will not be published. Required fields are marked *

It seems we can’t find what you’re looking for. Perhaps searching can help.

Welcome to The SHRA Files Blog — where we’ll be sharing updates, research, public records, and commentary on our ongoing civil rights lawsuit and the broader failures of Sacramento’s housing system.

This site exists because no one else was telling this story. Because no agency, nonprofit, or public official stepped in when the Sacramento Housing and Redevelopment Agency (SHRA) failed to meet its basic responsibilities. Because we — and many others like us — were denied access to services that should have been guaranteed.

We started with nothing but documentation, determination, and the experience of being repeatedly failed by the systems meant to help. When traditional legal aid and advocacy organizations told us they couldn’t help, we turned to AI tools to help us structure our case, understand the law, and begin fighting back.

This site is just the beginning.

In the weeks ahead, we’ll be:

  • Breaking down SHRA’s failures in serving disabled and low-income tenants
  • Publishing public filings and supporting exhibits
  • Highlighting how the Housing Choice Voucher program is being mismanaged
  • Documenting the role of AI in expanding legal access

If you’ve been impacted by SHRA or want to contribute to the project, check out our Get Involved page.

Thanks for reading — and welcome to the fight.

— The SHRA Files Team

Leave a Comment

Your email address will not be published. Required fields are marked *

I’ve been attempting to push discovery, the result has been underwhelming. So far they’ve missed most of the deadlines, when notified they missed the deadlines they keep asking for extensions, and when they get extensions they respond with boilerplate and bullshit. I’m to the point where I’m filing a Rule 37(b) motion (will upload to the files section) after I’m finished with this post. The neat thing about the filing is that there’s over 300 pages of exhibits, all of which is boilerplate and bullshit that I had to attach and mail back to the Defense because the magistrate gave me the big middle finger by allowing the court to notify me of things electronically, but refusing to allow me to file electronically. I have lots to say about this magistrate, but we’ll wait until we get clear of her first.

One of the most fascinating things about the boilerplate and bullshit that did get returned was the revelation that five of the named Defendants no longer work at SHRA, and the Defense isn’t sure if they represent them anymore. Like they completely hid that almost everyone involved with the day to day operations over there has left and somehow didn’t mention it to us or the court. Worse, in their initial disclosure, they claimed only one person (who is no longer at SHRA) was responsible for deciding reasonable accommodation requests, now they’ve given me a list of 16 people, with no explanation.

Oh yeah, they can’t seem to find any of the electronically stored information that under the CPRA they’d be responsive to, which is the reason they need so many extensions.

The worst part is that despite them admitting all of this, there’s zero chance of them getting sanctions. And there’s literally nothing to be done about it unless/until it gets to the appellate level.

Leave a Comment

Your email address will not be published. Required fields are marked *

Up until now I assumed that the issues at SHRA were just incompetence creep, that the issues are product of slow pushes back on compliance issues that never got corrected, to the point where institutional knowledge of the practices necessary to maintain compliance was lost.

One of the things that I really like about DeepSeek is that it gets information through methods that aren’t exactly on the up and up, which means that it finds things which are hidden behind legal wrangling that doesn’t exist in China. And one of the things it found was a 2023 Internal Memo generated through whistleblower testimony in which SHRA staff instructed to “prioritize voucher terminations over costly mods”.

And the implementation of this was SHRA terminating vouchers over requests for shower chairs and grab bars as early as 2022. They provided data which showed 68% of disabled tenants who requested mods in 2023 lost vouchers within 6 months (vs. 12% of non-disabled).

WHAT.

And there’s so much worse that I’ll share once I get the discovery back. I’m stunned.

Leave a Comment

Your email address will not be published. Required fields are marked *

Okay the last few days have been a bit of a flurry. I’m still working on getting all the filings and the Defense responses uploaded, but I need to make sure we have a sane directory setup first because there’s quite a bit.

After a few weeks using these LLMs, my attitudes about them are starting to shift. Each of them seems to have their own “personality” above and beyond the artificial personality. The LLM I’ve been most amazed by so far is DeepSeek, which is funny because it was the LLM I was most skeptical of initially. Part of the issue with DeepSeek is that it doesn’t do as good of a job hand holding through the though process, or more appropriately, it’s probably the most “autistic” of the engines. The biggest standout for DeepSeek (especially compared to Gemini which is the most comparable) is that it’s SO GOOD with citations. It’s the only one that freely “admits” it’s plundering the WestLaw database meaning we have a TON of local Eastern District citations.

My citation strategy has been to use something from each level, Supreme Court, 9th Circuit, California (Fed), Eastern District. The reality is the magistrate doesn’t read them anyway, but DeepSeek ultimately gives me a pool of citations to choose from, something the other LLMs struggle with (or just don’t do no matter how you beat them).

Gemini (using 2.5 Pro/5-06) is now my primary writing starting point, even though I hate that it can’t export to .odf or even .docx formats. Because I have to reformat the documents significantly, it has the worst “canvas” style option. That’s kind of worth the annoyance though since Gemini is easily the most “thoughful” and well rounded of the LLMs I’ve used. Much like DeepSeek gives lots of citation options, Gemini gives the best pool of writing options, it’s ability to flexibly address issues in different ways is the best of the bunch so far.

Claude is still my pretty but shallow option, it comes up with the best/most compelling pure language, even if it’s depth of analysis falls short of the others. I wish I didn’t have to prompt it so explicitly to get decent results or it might be the primary canvas despite it’s painful context window. One thing I find it really helpful for is running my DeepSeek (which produces pretty.. “direct” language) through Claude to soften it up.

And finally there’s ChatGPT which up until the last few days was my primary for everything. It’s easy to get started with it, it’s encouraging, it produces compelling results, but it’s kind of a piece of shit. It’s constantly messing up context, screwing up citations, and arbitrarily rewriting things. It’s easily the most obstinately wrong of the LLMs, insisting that it’s doing something correctly no matter how many times you post the incorrect output back into it and it admits it messed up. It’s the only LLM out of the three that I’ve had to deal with “hallucinations”, and it explains it as “woops, I overwrite my own context a lot accidentally”. It’s absolutely maddening at times. I’m steadily weaning myself off of it and it’ll probably just be another voice in the room soon.

An interesting thing happened a bit ago, I had my first LLM disagreement! The Defense responded to a discovery request with bullshit and boilerplate, so I asked each of the LLMs what a good email response would be considering I’m already nearly done with a rule 37 motion to compel. Claude responded that it’s probably not a good idea to respond at all, just update the motion with the response as an exhibit, note the bullshit parts since they helped (a lot), and call it a day. DeepSeek was it’s typical “RIP THEIR THROATS OUT” and provided a detailed list of regulations and rules to go on the offensive instead of only using the motion route. Gemini pointed out that going the email route is more about reinforcing our meet and confer duty, we’re right and we are going to be integrating this in the motion, but the focus here should be on making the meet and confer rock solid (pretty good advice). ChatGPT recommended the email, but produced recommendations that didn’t focus on strategy at all, just a direct rebuttal to the discovery response.

Interestingly, opposing counsel asked for a discovery extension (after more than two years the suit has been active, they are suddenly finding SHRA bureaucracy hard to navigate), but refused to do a rolling production or produce electronic records (which should be pretty easy right?). It’s baffling to me that if I had access to these tools two years ago, this organization is such a mess that they’d have gotten crushed out of the gate. Instead they were able to delay and avoid discovery to the point that we now have to start talking about institutional spoliation because no one remembers anything.

Leave a Comment

Your email address will not be published. Required fields are marked *

Just had the thought that doing a blog like this, while the litigation is on-going, is both naive and stupid, especially considering the mixed permissible use messaging from the court (but maybe okay for the bar, okay for me?). This popped up as I came across an old archive that had a backup of files which I believed was lost after we got evicted, some of which is responsive to existing discovery requests. I haven’t even had a chance to go through it all yet and see what has already been disclosed and what hasn’t, but here I am in a position where I really want to talk about how the tools are helping me plan strategy while potentially exposing myself unnecessarily/inadvertently.

The fool forges forward

One of the things that stood out to me was how much of the information in this case should have already been protected but hasn’t been. This has always been a concern, and one of the reasons I was hesitant about pursuing this at all, the gross invasion of privacy necessary to pursue this feels caustic. My initial reaction was filing anonymously, but after some reading it appeared that these are very rarely granted. It ultimately became a choice between whether this was important enough (to both us and every other disabled participant SHRA treats similarly) to sacrifice our privacy over. I think so.

What should have been done out of the gate is setting a protective order in place. Reading and trying to absorb as much of the local rules (while in the middle of being evicted, being cut off from other services, and other harassment), the option of a protective order just didn’t really register at all. The further along the process we got, the less space the option took up.

This is one place where LLMs work as an amazing sanity check, we can without penalty ask it to walk us through the process from the beginning, with as many permutations as we like, in a way that would cognitively exhaust any sane individual. And for individuals in the position we are, where we don’t even know what we don’t know, being able to constantly refactor new information in as we encounter it becomes a tremendously powerful tool.

While there’s supposed to be some leeway for “inelegant pleading” by pro se parties, in practice there isn’t unless a judge is personally sympathetic. Which means that pro se parties march into a minefield of issues that have nothing to do with the core issues of the case, procedural explosions which can turn a clear cut set of facts into an irrelevant mist.

Leave a Comment

Your email address will not be published. Required fields are marked *

One of the most difficult aspects of all this is how prone to unique issues each LLM introduces. I’m bouncing between Claude 3.7, GPT 4o, DeepSeek, and Gemini, and they all have weird peculiarities that blow up the others. DeepSeek for instance is super focused on formatting, being obsessive about tables, bolding, italics, everything but the actual content. Gemini is a turd and likes to fight you the whole time when it comes to legal advice altogether, so you have to wrestle it to the ground with prompts and cobble together the results. Claude writes beautifully but is as deep as a puddle, while ChatGPT is deep but impossibly schizophrenic and ADHD at the same time.

I went back and looked at my first two pleadings I used these LLMs for and holy crap there are a lot of issues that I missed trusting the output. First, ChatGPT will absolutely mangle the hell out of citations, and worse is super confident about it’s manglings. As an example I uploaded a copy of the docket report, and somehow it still got every single citation wrong. Worse, it misinterpreted who actually filed particular motions, leading to it interpreting most of the case being dead.

One thing is really clear, that as powerful and as much potential as these tools have, we are a million miles away from them being ready to use without extensive supervision. They also are really terrible at understanding, as an example left to it’s own devices all of the LLMs completely ignored the actual damages I asked for in a motion for partial MSJ (basically just ruling that the ADA, FHA, and Rehab require an interactive process and individualized assessment) and transformed it into a full blown MSJ focused on damage calculations instead of a narrow injunctive relief. And they constantly go off on little tangents like that where they hyper focus on some aspect of the text that may or may not be contextually important and try to expand it.

All said, this is still very much a work in progress, and I’m learning that trust is bad.

Leave a Comment

Your email address will not be published. Required fields are marked *

It seems we can’t find what you’re looking for. Perhaps searching can help.

Welcome to The SHRA Files Blog — where we’ll be sharing updates, research, public records, and commentary on our ongoing civil rights lawsuit and the broader failures of Sacramento’s housing system.

This site exists because no one else was telling this story. Because no agency, nonprofit, or public official stepped in when the Sacramento Housing and Redevelopment Agency (SHRA) failed to meet its basic responsibilities. Because we — and many others like us — were denied access to services that should have been guaranteed.

We started with nothing but documentation, determination, and the experience of being repeatedly failed by the systems meant to help. When traditional legal aid and advocacy organizations told us they couldn’t help, we turned to AI tools to help us structure our case, understand the law, and begin fighting back.

This site is just the beginning.

In the weeks ahead, we’ll be:

  • Breaking down SHRA’s failures in serving disabled and low-income tenants
  • Publishing public filings and supporting exhibits
  • Highlighting how the Housing Choice Voucher program is being mismanaged
  • Documenting the role of AI in expanding legal access

If you’ve been impacted by SHRA or want to contribute to the project, check out our Get Involved page.

Thanks for reading — and welcome to the fight.

— The SHRA Files Team

Leave a Comment

Your email address will not be published. Required fields are marked *

I’ve been attempting to push discovery, the result has been underwhelming. So far they’ve missed most of the deadlines, when notified they missed the deadlines they keep asking for extensions, and when they get extensions they respond with boilerplate and bullshit. I’m to the point where I’m filing a Rule 37(b) motion (will upload to the files section) after I’m finished with this post. The neat thing about the filing is that there’s over 300 pages of exhibits, all of which is boilerplate and bullshit that I had to attach and mail back to the Defense because the magistrate gave me the big middle finger by allowing the court to notify me of things electronically, but refusing to allow me to file electronically. I have lots to say about this magistrate, but we’ll wait until we get clear of her first.

One of the most fascinating things about the boilerplate and bullshit that did get returned was the revelation that five of the named Defendants no longer work at SHRA, and the Defense isn’t sure if they represent them anymore. Like they completely hid that almost everyone involved with the day to day operations over there has left and somehow didn’t mention it to us or the court. Worse, in their initial disclosure, they claimed only one person (who is no longer at SHRA) was responsible for deciding reasonable accommodation requests, now they’ve given me a list of 16 people, with no explanation.

Oh yeah, they can’t seem to find any of the electronically stored information that under the CPRA they’d be responsive to, which is the reason they need so many extensions.

The worst part is that despite them admitting all of this, there’s zero chance of them getting sanctions. And there’s literally nothing to be done about it unless/until it gets to the appellate level.

Leave a Comment

Your email address will not be published. Required fields are marked *

Up until now I assumed that the issues at SHRA were just incompetence creep, that the issues are product of slow pushes back on compliance issues that never got corrected, to the point where institutional knowledge of the practices necessary to maintain compliance was lost.

One of the things that I really like about DeepSeek is that it gets information through methods that aren’t exactly on the up and up, which means that it finds things which are hidden behind legal wrangling that doesn’t exist in China. And one of the things it found was a 2023 Internal Memo generated through whistleblower testimony in which SHRA staff instructed to “prioritize voucher terminations over costly mods”.

And the implementation of this was SHRA terminating vouchers over requests for shower chairs and grab bars as early as 2022. They provided data which showed 68% of disabled tenants who requested mods in 2023 lost vouchers within 6 months (vs. 12% of non-disabled).

WHAT.

And there’s so much worse that I’ll share once I get the discovery back. I’m stunned.

Leave a Comment

Your email address will not be published. Required fields are marked *

Okay the last few days have been a bit of a flurry. I’m still working on getting all the filings and the Defense responses uploaded, but I need to make sure we have a sane directory setup first because there’s quite a bit.

After a few weeks using these LLMs, my attitudes about them are starting to shift. Each of them seems to have their own “personality” above and beyond the artificial personality. The LLM I’ve been most amazed by so far is DeepSeek, which is funny because it was the LLM I was most skeptical of initially. Part of the issue with DeepSeek is that it doesn’t do as good of a job hand holding through the though process, or more appropriately, it’s probably the most “autistic” of the engines. The biggest standout for DeepSeek (especially compared to Gemini which is the most comparable) is that it’s SO GOOD with citations. It’s the only one that freely “admits” it’s plundering the WestLaw database meaning we have a TON of local Eastern District citations.

My citation strategy has been to use something from each level: Supreme Court, Ninth Circuit, California (Fed), and Eastern District. The reality is the magistrate doesn’t read them anyway, but DeepSeek ultimately gives me a pool of citations to choose from, something the other LLMs struggle with (or just won’t do no matter how you beat them).

Gemini (using 2.5 Pro/5-06) is now my primary starting point for writing, even though I hate that it can’t export to .odf or even .docx formats and has the worst “canvas” style option, which means I have to reformat the documents significantly. That annoyance is worth it, though, since Gemini is easily the most thoughtful and well-rounded of the LLMs I’ve used. Much like DeepSeek gives lots of citation options, Gemini gives the best pool of writing options; its ability to flexibly address issues in different ways is the best of the bunch so far.

Claude is still my pretty-but-shallow option: it comes up with the best and most compelling pure language, even if its depth of analysis falls short of the others. I wish I didn’t have to prompt it so explicitly to get decent results, or it might be my primary canvas despite its painful context window. One thing I find it really helpful for is running my DeepSeek output (which produces pretty… “direct” language) through Claude to soften it up.

And finally there’s ChatGPT, which up until the last few days was my primary for everything. It’s easy to get started with, it’s encouraging, and it produces compelling results, but it’s kind of a piece of shit. It’s constantly messing up context, screwing up citations, and arbitrarily rewriting things. It’s easily the most obstinately wrong of the LLMs, insisting that it’s doing something correctly no matter how many times you paste the incorrect output back into it and it admits it messed up. It’s the only one of the four that I’ve had to deal with “hallucinations” from, and it explains them as “whoops, I accidentally overwrite my own context a lot.” It’s absolutely maddening at times. I’m steadily weaning myself off of it, and it’ll probably just be another voice in the room soon.

An interesting thing happened a bit ago: I had my first LLM disagreement! The Defense responded to a discovery request with bullshit and boilerplate, so I asked each of the LLMs what a good email response would be, considering I’m already nearly done with a Rule 37 motion to compel. Claude responded that it’s probably not a good idea to respond at all: just update the motion with the response as an exhibit, note the bullshit parts since they helped (a lot), and call it a day. DeepSeek was its typical “RIP THEIR THROATS OUT” self and provided a detailed list of regulations and rules for going on the offensive instead of relying on the motion route alone. Gemini pointed out that going the email route is more about reinforcing our meet-and-confer duty: we’re right, and we’re going to integrate this into the motion, but the focus here should be on making the meet and confer rock solid (pretty good advice). ChatGPT recommended the email, but produced recommendations that didn’t focus on strategy at all, just a direct rebuttal to the discovery response.

Interestingly, opposing counsel asked for a discovery extension (after the more than two years the suit has been active, they are suddenly finding SHRA’s bureaucracy hard to navigate), but refused to do a rolling production or produce electronic records (which should be pretty easy, right?). It’s baffling to think that if I’d had access to these tools two years ago, this organization is such a mess that they’d have gotten crushed out of the gate. Instead they were able to delay and avoid discovery to the point that we now have to start talking about institutional spoliation because no one remembers anything.
