Compare commits


9 Commits

| Author | SHA1 | Message | Date |
| --- | --- | --- | --- |
| Kathleen Fitzpatrick | 2047b6fab1 | add IAU | 2025-10-27 08:53:39 -04:00 |
| Kathleen Fitzpatrick | 18eb6555d0 | rebuild for webmentions | 2025-09-08 06:25:39 -04:00 |
| Kathleen Fitzpatrick | ba107935be | add learning | 2025-09-07 13:56:52 -04:00 |
| Kathleen Fitzpatrick | 243831864d | add success | 2025-08-31 14:44:52 -04:00 |
| Kathleen Fitzpatrick | 88fb7d434f | rebuild for webmentions | 2025-08-31 14:28:20 -04:00 |
| Kathleen Fitzpatrick | 599779e48a | fix merge? | 2025-08-31 13:40:25 -04:00 |
| Kathleen Fitzpatrick | 6b92aa6e54 | adding longevity | 2025-08-18 13:23:23 -04:00 |
| Kathleen Fitzpatrick | 83bcc2d6d2 | fix typo | 2025-08-10 13:02:16 -04:00 |
| Kathleen Fitzpatrick | 9829b55899 | new post 250810 | 2025-08-10 08:25:48 -04:00 |
16 changed files with 1966 additions and 22 deletions

File diff suppressed because one or more lines are too long


@@ -13,15 +13,30 @@
"state": {
"type": "markdown",
"state": {
"file": "blog/2025-06-26-distinguished.md",
"file": "blog/2025-09-04-learning.md",
"mode": "source",
"source": false
},
"icon": "lucide-file",
"title": "2025-06-26-distinguished"
"title": "2025-09-04-learning"
}
},
{
"id": "5b83e2a3f24371eb",
"type": "leaf",
"state": {
"type": "markdown",
"state": {
"file": "blog/2025-10-27-iau.md",
"mode": "source",
"source": false
},
"icon": "lucide-file",
"title": "2025-10-27-iau"
}
}
]
],
"currentTab": 1
}
],
"direction": "vertical"
@@ -155,6 +170,7 @@
},
"left-ribbon": {
"hiddenItems": {
"bases:Create new base": false,
"switcher:Open quick switcher": false,
"graph:Open graph view": false,
"canvas:Create new canvas": false,
@@ -164,34 +180,43 @@
"vscode-editor:Create Code File": false
}
},
"active": "8023859adaab86ea",
"active": "5b83e2a3f24371eb",
"lastOpenFiles": [
"index.njk",
"blog/2025-06-26-distinguished.md",
"blog/2025-05-30-all-this.md",
"blog/2025-05-11-networking.md",
"blog/img/Slide9.jpeg",
"blog/img/Slide8.jpeg",
"blog/img/Slide7.jpeg",
"blog/img/Slide6.jpeg",
"blog/img/Slide5.jpeg",
"blog/img/Slide4.jpeg",
"blog/img/Slide3.jpeg",
"blog/img/Slide2.jpeg",
"blog/img/Slide1.jpeg",
"blog/2025-09-04-learning.md",
"blog/2025-10-27-iau.md",
"blog/2024-11-29-posse.md",
"blog/2025-04-20-gitea.md",
"blog/2024-11-29-rebuild.md",
"blog/2023-07-03-eleventy.md",
"blog/2024-02-18-syndication.md",
"blog/2024-05-07-happening.md",
"blog/2025-01-31-placeholder.md",
"blog/2025-02-18-equality.md",
"blog/2025-04-17-path.md",
"blog/2025-03-18-writing.md",
"blog/2025-02-18-equality.md",
"blog/2025-05-11-networking.md",
"blog/2025-08-31-success.md",
"blog/2025-08-09-networking-cont.md",
"blog/2025-08-18-longevity.md",
"networking.md",
"blog/2025-06-26-distinguished.md",
"index.njk",
"blog/2025-05-30-all-this.md",
"blog/2024-03-01-reading.md",
"blog/2024-06-30-reading.md",
"blog/2023-06-22-recalibrating-again.md",
"blog/2024-07-20-new-jobs.md",
"blog/2025-02-18-audiobook.md",
"blog/2025-02-17-independence.md",
"blog/2025-01-31-placeholder.md",
"blog/2024-12-22-finite.md",
"blog/2024-12-21-rest.md",
"blog/2024-12-20-storage.md",
"blog/2024-12-14-distraction.md",
"blog/2024-11-29-posse.md",
"blog/2024-11-30-defeat.md",
"blog/2023-11-30-lecture.md",
"blog/2023-12-26-smart-notes.md",
"blog/2023-12-29-concerns.md",
"blog/2023-12-29-value.md",
"blog/2023-12-31-governance.md",
"blog.njk",
"blog/img/msu-paths.jpg",
"blog/img/weekend.png"


@@ -0,0 +1,34 @@
---
title: Networking Continued
date: 2025-08-09T16:38:09-04:00
permalink: /networking-continued/
tags:
- tinkering
---
As you may recall, I've been experimenting with setting up a home server, and several months ago had gotten stuck on an issue related to [the structure of my network](https://kfitz.info/networking/). [Taylor hopped in](https://kfitz.info/networking/?ht-comment-id=26755687) and really helped me understand how everything *ought* to work.
But it's not working. And I'm again flummoxed.
Here's the setup:
1. I have my ISP's modem/router/gateway monstrosity (the BGW320) running in IP Passthrough mode, with the WAN IP address being passed to my gateway Eero.
2. I have my Eeros set to Automatic DHCP mode; the gateway Eero is successfully getting the WAN IP address and is handing out private IP addresses in the 192.168.4.X range.
3. I have a registered domain name (let's say `example.net`), and I have an A record at my DNS service pointing to my WAN IP address. I have also created a subdomain A record (`service`) pointing to the same IP address. DNS Checker gives me all green checks for both.
4. I have a mini server, running Proxmox.
5. I have installed Nginx Proxy Manager in a container on the Proxmox (an LXC), which is running and reachable at the static address 192.168.4.11.
6. I have installed the service I'm trying to expose in another LXC, which is running and reachable at the static address 192.168.4.12.
7. I have set up port forwarding on my Eero network for ports 80 and 443 to 192.168.4.11.
8. I have created a proxy host in NPM, for which all the dots are green:
- Domain Name: service.example.net
- Scheme: http
- Forward Hostname/IP: 192.168.4.12
- Forward Port: `port`
- Block Common Exploits and Websockets Support on
- Access List: Publicly Accessible
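When a chain like this fails silently, it helps to test each hop separately. Below is a minimal Python sketch (my own addition, not part of the setup above) that checks whether a name resolves and whether a TCP port accepts connections; the hostnames and addresses shown in the comments are the placeholders used in the list above.

```python
import socket

def resolves(name):
    """Return the resolved IPv4 address for name, or None on failure."""
    try:
        return socket.gethostbyname(name)
    except OSError:
        return None

def port_open(host, port, timeout=3.0):
    """Return True if a TCP connection to (host, port) succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Hypothetical checks mirroring the setup above:
#   resolves("service.example.net")  -> should match the WAN address
#   port_open("192.168.4.11", 80)    -> NPM reachable on the LAN
#   port_open("192.168.4.12", 80)    -> the proxied service itself
```

If `port_open` succeeds against 192.168.4.11 from inside the LAN but the public hostname fails, the break is at the modem or the port forward rather than in NPM.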
But `http://service.example.net:port` refuses to connect, as does `http://example.net`, either from my local network or through my VPN. And `traceroute` to either `example.net` or `service.example.net` stalls out.
I've checked the Proxmox firewall and inbound 80 and 443 are both set to accept. I've checked to see whether my ISP's monstrosity's firewall could be blocking those ports but... who's to say. The NAT/Gaming (sigh) panel of the admin interface isn't showing the gateway Eero as a device that could need anything in particular sent its way, so my assumption is that IP Passthrough passes inbound requests through for the Eero to sort out, too.
I've searched around, and the nearest thing I've found to what I'm trying to do and how I'm trying to do it is in [this Reddit thread](https://www.reddit.com/r/Proxmox/comments/u857x5/nginx_proxy_manager_setup_troubles/), but the problem in that case is back at the beginning with the A record, which is definitely not my issue, unless I spelled my domain name wrong at the DNS. (I didn't.) And that person was able to get to the NPM congratulations page; my connections get refused entirely.
If anybody sees anything that I should adjust, or take a look at adjusting, I'd be grateful to hear. I'm already *this* close to dumping my ISP anyhow due to some ongoing service issues, and getting rid of their annoying modem/router/gateway would be a bonus, but I'm not entirely certain that it's the problem, and I'd love to find a way through without taking that step.


@@ -0,0 +1,22 @@
---
title: Longevity and Sustainability
date: 2025-08-18T11:45:25-04:00
permalink: /longevity/
tags:
- thinking
---
I've been puzzling a bit of late about the relationship between sustainability planning for independent, nonprofit digital projects and the need to provide evidence of that sustainability even as it's being developed. The question has been pitched to me recently as being about *longevity*: can your project promise potential supporters that it will survive the next ten years?
It's a valid question, especially when the project is one that is in some sense *about* longevity, about (for instance) preserving the products of knowledge creation for the future. But it's a hard one to answer in the best of times, and goodness knows that we are not currently living through the best of times.
How much have the ways that we think about longevity and sustainability been conditioned by our experiences of working with software and platforms that, even when provided without charge, are operated by massive corporations with resources to burn? These companies can afford to move quickly, to respond to rapid growth, to develop robust user support, and to add new features with the kind of agility that very few small nonprofit or community-based groups can muster.
This is not to say that nonprofit projects should operate freed from any expectations for professionalism, including long-term planning, technical durability and security, attention to user needs, and so on; these are crucial considerations for any piece of infrastructure. But I worry that some of the metrics that we use in thinking about sustainability wind up privileging corporate solutions even when we're seeking values-aligned, non-extractive alternatives.
It will not shock anyone that I'm mostly thinking about my own project in this context.[^1] That project has been around for more than ten years, and has over that time demonstrated slow, sustainable growth, but it has been dependent on grant-based, project-oriented funding to support its work. We are now trying to break away from that model and put in place a mature revenue generation model that will allow us to recoup operating costs (and with luck to produce a small margin to support future needs) through membership fees paid by organizations and institutions that want to use our platform. As part of their membership, they get a voice in our governance processes, and thus have the ability to shape the project's future.
But for very understandable reasons, we're hearing questions about the potential longevity of the project, as folks with decision-making responsibility want to be sure that their investment will be to a good end, and that the work they subsequently entrust to the platform will be available over the long term. It's a Catch-22, though, in that *without* their investment (and the investment of other institutions like theirs) we absolutely will not survive -- so how can I say that our model will have succeeded before the future anterior becomes simple past?
At root: can we shift our thinking so that an investment in a non-extractive alternative is understood to be an investment in the community itself, *of which the investor forms a part*, in a way that doesn't ask small projects just getting underway to demonstrate all of the durability and agility of corporate alternatives? Can we begin to recognize that some aspects of the durability and agility we've been conditioned to demand have been produced precisely through an extractive economic model that is continuing to impoverish the very commons that we're trying to build? How can we turn the question about the project's longevity into a question about mutual commitment to a shared endeavor?
[^1]: Though I'm posting this in my own personal pondering space rather than over there because I'm hoping that respondents will think with *me* about these issues rather than immediately associate them with the project, even though such an association is all but inevitable.


@@ -0,0 +1,11 @@
---
permalink: /success-at-last/
date: 2025-08-31T14:34:18-04:00
title: Success, at Last
tags:
- tinkering
---
After a [whole](/networking/) [lot](/networking-continued/) of tinkering, I think I have at last managed to get my home server up and running the way I want. Doing so required a change of ISP, which I wanted to do anyway as I'm getting a much better deal (including double the network speed) from my new provider. It also required a day and a half of further frustration, as the port forwarding setup that ought to have worked wasn't working at all, but after further futzing I've managed to get it all working pretty slickly.
In my current setup, I have Nginx Proxy Manager running in a container on my Proxmox, with a DNS entry set up pointing my IP address to it. Then I have a proxy host pointing to another container in which I'm running Gitea, and I'm successfully pushing and pulling code for this site to and from it.
Next up is setting up the actual hosting of this site and a few others that I've been wanting to pull in house. It's nice to see the end of the network architecture phase of this project drawing near and to have the creative work of writing and building opening up in front of me at last!


@@ -0,0 +1,30 @@
---
title: Learning
date: 2025-09-07T13:28:41-04:00
permalink: /learning/
tags:
- tinkering
---
Over the last several months, I've been engaged in a project designed to bring a bunch of the stuff I'm hosting in various places around the internet home. And I mean "home" quite literally: I not only wanted to control the data I was putting out into the world, and the software I was using to do it, but also the metal on which it's hosted. I wanted my stuff on my server in my very own house.
Why? I can't fully articulate the drive. Some of it stems from a long-standing desire to "[de-google](https://en.wikipedia.org/wiki/DeGoogle)," to [quit Twitter](https://www.theverge.com/24293448/x-twitter-musk-deactivate-how-to), and to focus my creative energy on formats and platforms that I can trust and over which I can exercise some level of control. But that drive got exacerbated by everything that's happened around us since January and the creeping sense that even good actors in today's technology landscape could wind up being attacked, or even weaponized. And so the question started nagging at me a bit: what would it be to *really* self-host? What would be required, and what would I need to learn?
I want to acknowledge the very clear ways in which the privileges of my education, my social position, and my income allow me to take a project like this on just because I feel like it. I have the disposable income to invest in a small home server and other equipment, and I live in a house that is wired for very fast fiber-based internet. I've also been an intermittent tinkerer for a couple of decades, having launched a blog on a shared hosting provider back in 2002 and having taken that blog -- uh, *this blog* -- through a wide variety of redesigns, platform migrations, and hosting changes over the years. Much of that tinkering is [documented in the archives](https://kfitz.info/tags/tinkering/), including my 2023 move away from WordPress, first to Jekyll and then to Eleventy.
So I've had a long-standing desire to be more in control of my digital footprint, to ensure that I own as much of the work I do online as possible, and to live up to [the values that the Knowledge Commons team has developed](https://about.hcommons.org/about-us/), including experimenting with new modes of working and supporting the open exchange of knowledge and using open-source tools to do so. And the last year has made me all the more cognizant of the ways that trusting my digital past and presence to services that I cannot fully control -- that may be highly trustworthy today but whose leadership could change and whose guiding values could shift at any time -- opened up a range of potential risks.
On top of which, each time I've learned something new in the process of my tinkering, I've found myself wanting to know more. So I decided at some point this spring that I was going to invest in the hardware and the time required for me to set up a home network capable of allowing me to self-host the various sites and services I've had scattered around elsewhere.
What I didn't recognize when I started down this path was how little I knew about networking. I'd sort of self-hosted a pretty good range of sites and services on Digital Ocean (including migrating from Github to my own [Gitea](/gitea/) instance), and I'd gotten passably good at pretty basic Linux systems administration thanks to their amazing suite of [tutorials](https://www.digitalocean.com/community/tutorials?q=docker+ubuntu)[^1]. I knew how to obtain a domain name and how to configure its DNS records to point to a particular server. I could follow the documentation provided for the installation and use of packages on that server. But several things had never occurred to me, things as basic as how you make it possible for devices on a local, private network to be selectively and securely reachable from outside that network when desired. Or what is required to set up a fully functioning webserver when you're starting with bare metal.
It took several months and a bunch of frustration for me to get everything working, but if you're reading this post it's currently working well. I'm writing in an Obsidian vault that contains the content of my Eleventy-based site. When I'm done writing I'll use npm to build and index the site and git to push it to the Gitea instance on my home server. I'll then ssh into the container hosting my website and pull the updates in from Gitea. It's super simple when it's all working.
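That loop -- build, push to Gitea, pull on the webserver -- can be sketched as a small script. This is my own illustration of the sequence described above, not the author's actual tooling; the remote name, SSH host alias, and path are hypothetical placeholders.

```python
import subprocess

# The publish sequence described above; "origin", "web-container", and
# /srv/site are hypothetical placeholders, not the real configuration.
STEPS = [
    ["npm", "run", "build"],                                     # build and index the Eleventy site
    ["git", "push", "origin", "main"],                           # push to the self-hosted Gitea instance
    ["ssh", "web-container", "git", "-C", "/srv/site", "pull"],  # pull the update on the webserver
]

def publish(dry_run=False):
    """Run each step in order, stopping on the first failure.
    With dry_run=True, just print the commands instead of running them."""
    for cmd in STEPS:
        if dry_run:
            print("$ " + " ".join(cmd))
        else:
            subprocess.run(cmd, check=True)
```

Running `publish(dry_run=True)` prints the three commands so you can sanity-check the sequence before letting it touch the real repositories.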
But when it's not, finding the right search terms to track down what could be wrong -- not to mention an unbefouled engine through which to do that search -- is really, really hard. And increasingly so when the results include posts made as long as 15 years ago about obsolete versions of the software you're asking about, on forums where n00bs are routinely yelled at for asking stupid questions and/or insulted for doing it wrong. And then there's the documentation that requires significant expertise to comprehend, and the "getting started" instructions that leave out key steps.
I got enormous help in sorting out some intractable issues from two key directions, though: prior blog posts here (see in particular [Networking](https://kfitz.info/networking/) and [Networking Continued](https://kfitz.info/networking-continued/)), which produced generous, thoughtful responses from several people (most notably the always amazing [Taylor Jadin](https://jadin.me) of Reclaim Hosting[^2]), and a series of Mastodon chats (most recently with the very kind and helpful [Monospace Mentor](https://floss.social/@monospace)[^3]). There's something to be said here about the ways that the human-to-human contact made possible by small networks and self-hosted open-source projects can allow for far better learning than can the aging content buried in vast piles of self-aggrandizing bloviation on major forums.
It's a point that should be obvious, except that we live at a time when a not insubstantial number of tech billionaires are trying to convince us that the future of education lies in AI rather than in human interactions and connections. Given the extent to which AI has already undermined our ability to find the information we need on the web, we would be well-served by spending more time thinking about how to reinforce the human networks that can support learning in the midst of entropic decline.
[^1]: The thing I most love about these tutorials is that they're written not as though you're just there to find the answer and get out, but as though you actually want to learn. That is, they don't just provide command after command, but rather walk you through what each command does and why you want to do it.
[^2]: I so, so admire his self-description as someone who is "passionate about educating and empowering people who want to make cool stuff on the web." I wish that there were more of that around and a lot fewer Reddit bros needing to display their dominance by trashing folks with less experience.
[^3]: Self-described "greybeard geek" who offers courses, support, and mentoring for folks seeking to build their DevOps skills -- as well as generous support for random folks on Mastodon asking "but how does the VM know that I'm asking it to be a webserver?"


@@ -0,0 +1,50 @@
---
title: "Trust in Science: Accessibility, Persistence, and the Public Good"
date: 2025-10-27T08:10:03-04:00
permalink: /trust-in-science/
tags:
- commons
- presentation
---
![Title slide: Trust in Science: Accessibility, Persistence, and the Public Good](img/Slide1.jpeg)
*I had the privilege last week of speaking at the International Association of Universities conference, held at the University of Rwanda. It was a long and at moments difficult journey, but well worth it for the conversations that took place there. The conference theme was "Building Trust in Higher Education" -- a goal that has formed the basis for my last two books -- and I was invited to speak as part of a plenary panel focused on "Trust in Science," which enabled me to talk a bit about the work that we're doing at [Knowledge Commons](https://hcommons.org) to make our platform a trusted, nonprofit, community-governed partner for institutions of higher education around the world. My presentation is below; I'll look forward to continuing this discussion.*
![trust](img/Slide2.jpeg)
I'm delighted to be here and to have this opportunity to talk a bit about trust in science. I want to start out by noting that "trust" is an awfully big word, especially as applied to higher education. For us to cultivate trust in the work we do in universities, we first have to demonstrate ourselves and our institutions worthy of that trust. It's not necessary for me to detail all of the ways that trust is being challenged today, but I'll note that some of these challenges derive from ongoing issues in the world around us, as misunderstandings of the motivations of scientists and ideological conflicts surrounding inconvenient research combine to produce widespread dismissals of the knowledge produced through scientific research, as well as growing concerns world-wide that politicians might interfere with scientific research or censor its results in highly damaging ways.
However, some of the challenges we face are of our institutions' own making. We might immediately think of the ongoing reproducibility crisis, or varying kinds of researcher malpractice that have created understandable concerns about the integrity of scientific work. But we must also consider the ways that many of our institutions have excluded the vast majority of the world's populations from participating in the knowledge creation processes that form the heart of research. In the United States, I frequently hear scholars and administrators lament the fact that the general public does not understand the good that our faculty and our institutions do -- but it's hardly surprising, when the public cannot see the work that we do, and therefore cannot understand our motivations for doing it or the ways that our work creates knowledge that supports healthy, sustainable communities. Restricting our work to exchanges among experts breeds distrust by keeping our reasoning and our results hidden from view.
![trust = accessibility + persistence](img/Slide3.jpeg)
I want to argue today that building trust in science today has two major prerequisites: accessibility and persistence. When I talk about accessibility, I mean in part to point to open access, which attempts to ensure that the results of research can be found by anyone. But I also mean that research needs to be accessible in another sense, in adopting a register of communication that can be broadly understood, ensuring that the work can not just be downloaded but read and engaged with. There are of course valid reasons that researchers use a professional vocabulary with one another, but that vocabulary often prohibits real engagement on the part of many of the publics that our institutions serve, publics who might be interested in what our institutions do if they were invited in. Many of our institutions and our funding bodies strongly encourage researchers to engage with broader audiences, but we need to ensure that doing so is integrated into our institutional reward structures, and that the work of translating advanced research for broad consumption is recognized as real work. If universities encourage and reward broader impacts by supporting researchers in making more of their work -- its processes as well as its results -- fully accessible, we will have the opportunity to cultivate public trust by building a richer understanding of what it is that researchers do, and why they do it.
At the same time, we need to think about the persistence of the work that researchers do: not only does research need to be made accessible, but it needs to *remain* accessible, even in the face of the significant challenges to science that many of our institutions are facing in the current political moment. Researchers on our campuses are investigating all manner of inconvenient questions about climate change, about global inequities, about the history of colonialism and the forms of oppression that it has created -- and much of this research is at risk of disappearing. Some of this risk comes from direct censorship, as we have seen governments demanding the removal of work they don't like from journals, websites, and databases, and defunding the research that makes that work possible. Some of the risk comes from shifting corporate priorities, as the for-profit companies that still control most of the scholarly communication infrastructure have goals and motivations and requirements that are often very different from those of our institutions. Ensuring that today's research results remain available to be built on tomorrow will require all of our institutions to think seriously about the infrastructure on which their researchers' work is hosted: who owns and operates that infrastructure, and to what ends. Is the most important goal of the infrastructure's owners sharing knowledge toward the creation of a better world, or is it returning value to shareholders?
![the words 'profit / nonprofit' are struck through and replaced with 'values alignment'](img/Slide4.jpeg)
That's a pretty crude distinction to draw. I'm sure that all of us know of nonprofit organizations that operate as extractively as many profit-driven companies, as well as corporations that operate with a clear sense of their responsibility to the public good. But it is essential -- and especially right now -- for institutions of higher education to insist that the partner organizations to which they entrust the knowledge they produce have goals and priorities that align with their own. This is true not least because of the non-reciprocal material relations between our institutions and too many of the infrastructure providers on which we rely: our researchers and our institutions freely give them the gift of our work, our labor, our time and attention, and in return they charge us, over and over again. When they can't charge us to access the work, they charge us to publish the work that we have done, and they charge us to access the data they have harvested about that work.
There are alternative models for scholarly and scientific communication that can help researchers make their work both more accessible and more persistent, however. These alternatives include publishing cooperatives, open repositories, and more. I want in the time I have remaining to tell you a bit about the project that I've had the privilege of working to build over the last ten years: Knowledge Commons.
![Knowledge Commons logo and URL](img/Slide5.jpeg)
[Knowledge Commons](https://hcommons.org) is an open-access, community-governed, nonprofit network hosted by Michigan State University, on which knowledge creators across the disciplines and around the world can deposit and share their work, build new collaborations, and create a vibrant digital presence for themselves, their teams, and their projects. Knowledge Commons is guided by the FAIR principles for open science, ensuring that the products of research entrusted to us are made findable, accessible, interoperable, and reusable, and is committed to living out the Principles for Open Scholarly Infrastructure.
![screenshot and datapoints on KCWorks](img/Slide6.jpeg)
The Commons brings together a next-generation repository, [KCWorks](https://works.hcommons.org), which is built on InvenioRDM, with a robust researcher profile system and a suite of WordPress-based publishing and communication tools. The Commons hosts nearly 60,000 researchers, instructors, practitioners, and students who are sharing and preserving their work. KCWorks registers DOIs via DataCite for every deposit and then versions those DOIs as works are updated, and it offers a very wide range of contributor roles, licenses, and subject headings that enable our metadata to serve nearly any purpose.
![screenshot of KCWorks Search](img/Slide7.jpeg)
KCWorks is highly interoperable, thanks to its strong REST API that connects with all repository operations and its built-in OAI-PMH server, allowing the repository's metadata to be readily consumed by a range of open services across the web, dramatically increasing the discoverability of the work researchers deposit with us. Upon deposit, that work is automatically pushed both to the contributor's profile on the Commons and to their ORCID record, and it can also be shared to various social media platforms.
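For a sense of what that interoperability looks like in practice, InvenioRDM-based repositories expose a JSON search endpoint at `/api/records` (per InvenioRDM's documented REST API; treat the exact path and parameters here as assumptions to verify against the target instance). A minimal helper for building such a query URL:

```python
from urllib.parse import urlencode

def records_search_url(base, query, size=10):
    """Build a search URL for an InvenioRDM-style /api/records endpoint.
    The endpoint path and parameter names follow InvenioRDM's REST API
    conventions; confirm them against the instance you are querying."""
    return f"{base.rstrip('/')}/api/records?" + urlencode({"q": query, "size": size})
```

For example, `records_search_url("https://works.hcommons.org", "open science")` yields `https://works.hcommons.org/api/records?q=open+science&size=10`.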
![persistence](img/Slide8.jpeg)
The Commons has to this point focused on creating greater accessibility for the products and processes of research, but if we are to succeed in transforming the global research ecosystem into one that is worthy of the public trust, we must face two key challenges. The first has to do with persistence. Though the project and its team are hosted by Michigan State University, the technical infrastructure we use to support the project is not; the university's computing and data infrastructures are not currently able to support our work. Instead, we are hosted on Amazon Web Services -- and as we found out yesterday, as robust as AWS is, it isn't immune from major technical failures. On top of which, AWS has become a massive consumer of university resources, as well as being part of a corporation that has not proven itself to have the public good as a primary driver. One might begin to wonder what could be possible if a collective of institutions were to come together and put the resources they spend in Silicon Valley toward developing academy-owned shared infrastructure, allowing higher education to take greater control of its own technological future. And what might become possible if that network of institutions were truly global, enabling the research that is developed and made available in one area of the world to be mirrored all over the world, allowing science to evade censorship wherever it might surface?
The Knowledge Commons team submitted a pre-proposal describing the first steps for such a network earlier this year to the [Trust in American Institutions Challenge](https://works.hcommons.org/records/xd3c5-g7j14) hosted by Lever for Change, and while we did not advance to the final round of consideration, the group of collaborating organizations that signed on to pursue this project -- including the Association of University Presses, the Association of Research Libraries, Jisc, the Open Access Scholarly Publishers Association, OAPEN, and more -- are still interested in pressing forward with it. We'll be meeting later this month to discuss our next steps.
![sustainability](img/Slide9.jpeg)
But key among those next steps is of course finding the resources to accomplish something so enormous, especially at a time in which so many of our institutions are facing austerity measures. Which points to the second challenge for Knowledge Commons in becoming a research platform worthy of the public trust: financial sustainability. We are committed to keeping the Commons free and open for any individual user to join the network, create a profile, share their work, and participate in the collaborations we make possible. In order to do so, we need universities and other research organizations to join the Commons consortium, investing their resources in a community-governed alternative that can make open science genuinely open to all. The future of the Commons depends on the will of that collective, in more ways than one.

9 binary image files added (diffs not shown); sizes: 81 KiB, 26 KiB, 40 KiB, 40 KiB, 36 KiB, 79 KiB, 79 KiB, 30 KiB, and 32 KiB.